00:00:00.000 Started by upstream project "autotest-per-patch" build number 132860
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.053 Using shallow fetch with depth 1
00:00:00.053 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.053 > git --version # timeout=10
00:00:00.063 > git --version # 'git version 2.39.2'
00:00:00.063 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.074 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.074 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.240 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.251 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.264 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.264 > git config core.sparsecheckout # timeout=10
00:00:03.275 > git read-tree -mu HEAD # timeout=10
00:00:03.290 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.311 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.311 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.394 [Pipeline] Start of Pipeline
00:00:03.406 [Pipeline] library
00:00:03.408 Loading library shm_lib@master
00:00:03.408 Library shm_lib@master is cached. Copying from home.
00:00:03.422 [Pipeline] node
00:00:03.434 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:03.436 [Pipeline] {
00:00:03.446 [Pipeline] catchError
00:00:03.447 [Pipeline] {
00:00:03.458 [Pipeline] wrap
00:00:03.465 [Pipeline] {
00:00:03.472 [Pipeline] stage
00:00:03.474 [Pipeline] { (Prologue)
00:00:03.490 [Pipeline] echo
00:00:03.491 Node: VM-host-SM0
00:00:03.498 [Pipeline] cleanWs
00:00:03.509 [WS-CLEANUP] Deleting project workspace...
00:00:03.509 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.515 [WS-CLEANUP] done
00:00:03.702 [Pipeline] setCustomBuildProperty
00:00:03.787 [Pipeline] httpRequest
00:00:04.227 [Pipeline] echo
00:00:04.229 Sorcerer 10.211.164.20 is alive
00:00:04.237 [Pipeline] retry
00:00:04.238 [Pipeline] {
00:00:04.249 [Pipeline] httpRequest
00:00:04.253 HttpMethod: GET
00:00:04.253 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.254 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.255 Response Code: HTTP/1.1 200 OK
00:00:04.256 Success: Status code 200 is in the accepted range: 200,404
00:00:04.256 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.546 [Pipeline] }
00:00:04.559 [Pipeline] // retry
00:00:04.566 [Pipeline] sh
00:00:04.847 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.861 [Pipeline] httpRequest
00:00:05.356 [Pipeline] echo
00:00:05.358 Sorcerer 10.211.164.20 is alive
00:00:05.368 [Pipeline] retry
00:00:05.370 [Pipeline] {
00:00:05.396 [Pipeline] httpRequest
00:00:05.400 HttpMethod: GET
00:00:05.401 URL: http://10.211.164.20/packages/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz
00:00:05.405 Sending request to url: http://10.211.164.20/packages/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz
00:00:05.411 Response Code: HTTP/1.1 200 OK
00:00:05.413 Success: Status code 200 is in the accepted range: 200,404
00:00:05.414 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz
00:01:15.855 [Pipeline] }
00:01:15.873 [Pipeline] // retry
00:01:15.881 [Pipeline] sh
00:01:16.163 + tar --no-same-owner -xf spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz
00:01:18.709 [Pipeline] sh
00:01:18.988 + git -C spdk log --oneline -n5
00:01:18.988 a393e5e6e [TEST]
00:01:18.988 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:18.988 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:18.988 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:18.988 66289a6db build: use VERSION file for storing version
00:01:19.006 [Pipeline] writeFile
00:01:19.021 [Pipeline] sh
00:01:19.304 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:19.315 [Pipeline] sh
00:01:19.597 + cat autorun-spdk.conf
00:01:19.597 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.597 SPDK_TEST_NVMF=1
00:01:19.597 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.597 SPDK_TEST_USDT=1
00:01:19.597 SPDK_TEST_NVMF_MDNS=1
00:01:19.597 SPDK_RUN_UBSAN=1
00:01:19.597 NET_TYPE=virt
00:01:19.597 SPDK_JSONRPC_GO_CLIENT=1
00:01:19.597 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:19.604 RUN_NIGHTLY=0
00:01:19.605 [Pipeline] }
00:01:19.619 [Pipeline] // stage
00:01:19.633 [Pipeline] stage
00:01:19.635 [Pipeline] { (Run VM)
00:01:19.648 [Pipeline] sh
00:01:19.928 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:19.928 + echo 'Start stage prepare_nvme.sh'
00:01:19.928 Start stage prepare_nvme.sh
00:01:19.928 + [[ -n 6 ]]
00:01:19.928 + disk_prefix=ex6
00:01:19.928 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:19.928 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:19.928 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:19.928 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.928 ++ SPDK_TEST_NVMF=1
00:01:19.928 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.928 ++ SPDK_TEST_USDT=1
00:01:19.928 ++ SPDK_TEST_NVMF_MDNS=1
00:01:19.928 ++ SPDK_RUN_UBSAN=1
00:01:19.928 ++ NET_TYPE=virt
00:01:19.928 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:19.928 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:19.928 ++ RUN_NIGHTLY=0
00:01:19.928 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:19.928 + nvme_files=()
00:01:19.928 + declare -A nvme_files
00:01:19.928 + backend_dir=/var/lib/libvirt/images/backends
00:01:19.928 + nvme_files['nvme.img']=5G
00:01:19.928 + nvme_files['nvme-cmb.img']=5G
00:01:19.928 + nvme_files['nvme-multi0.img']=4G
00:01:19.928 + nvme_files['nvme-multi1.img']=4G
00:01:19.928 + nvme_files['nvme-multi2.img']=4G
00:01:19.928 + nvme_files['nvme-openstack.img']=8G
00:01:19.928 + nvme_files['nvme-zns.img']=5G
00:01:19.928 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:19.928 + (( SPDK_TEST_FTL == 1 ))
00:01:19.928 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:19.928 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:19.928 + for nvme in "${!nvme_files[@]}"
00:01:19.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:19.928 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:19.928 + for nvme in "${!nvme_files[@]}"
00:01:19.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:19.929 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:19.929 + for nvme in "${!nvme_files[@]}"
00:01:19.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:19.929 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:19.929 + for nvme in "${!nvme_files[@]}"
00:01:19.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:19.929 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:19.929 + for nvme in "${!nvme_files[@]}"
00:01:19.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:19.929 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:19.929 + for nvme in "${!nvme_files[@]}"
00:01:19.929 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:19.929 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.187 + for nvme in "${!nvme_files[@]}"
00:01:20.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:20.187 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.187 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:20.187 + echo 'End stage prepare_nvme.sh'
00:01:20.187 End stage prepare_nvme.sh
00:01:20.198 [Pipeline] sh
00:01:20.479 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:20.479 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:01:20.479
00:01:20.479 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:20.479 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:20.479 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:20.479 HELP=0
00:01:20.479 DRY_RUN=0
00:01:20.479 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:01:20.479 NVME_DISKS_TYPE=nvme,nvme,
00:01:20.479 NVME_AUTO_CREATE=0
00:01:20.479 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:01:20.479 NVME_CMB=,,
00:01:20.479 NVME_PMR=,,
00:01:20.479 NVME_ZNS=,,
00:01:20.479 NVME_MS=,,
00:01:20.479 NVME_FDP=,,
00:01:20.479 SPDK_VAGRANT_DISTRO=fedora39
00:01:20.479 SPDK_VAGRANT_VMCPU=10
00:01:20.479 SPDK_VAGRANT_VMRAM=12288
00:01:20.479 SPDK_VAGRANT_PROVIDER=libvirt
00:01:20.479 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:20.479 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:20.479 SPDK_OPENSTACK_NETWORK=0
00:01:20.479 VAGRANT_PACKAGE_BOX=0
00:01:20.479 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:20.479 FORCE_DISTRO=true
00:01:20.479 VAGRANT_BOX_VERSION=
00:01:20.479 EXTRA_VAGRANTFILES=
00:01:20.479 NIC_MODEL=e1000
00:01:20.479
00:01:20.479 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:01:20.479 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:23.762 Bringing machine 'default' up with 'libvirt' provider...
00:01:24.019 ==> default: Creating image (snapshot of base box volume).
00:01:24.277 ==> default: Creating domain with the following settings...
00:01:24.277 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734348089_34149ab6f7d37c6edf74
00:01:24.277 ==> default: -- Domain type: kvm
00:01:24.277 ==> default: -- Cpus: 10
00:01:24.277 ==> default: -- Feature: acpi
00:01:24.277 ==> default: -- Feature: apic
00:01:24.277 ==> default: -- Feature: pae
00:01:24.277 ==> default: -- Memory: 12288M
00:01:24.277 ==> default: -- Memory Backing: hugepages:
00:01:24.277 ==> default: -- Management MAC:
00:01:24.277 ==> default: -- Loader:
00:01:24.277 ==> default: -- Nvram:
00:01:24.277 ==> default: -- Base box: spdk/fedora39
00:01:24.277 ==> default: -- Storage pool: default
00:01:24.277 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734348089_34149ab6f7d37c6edf74.img (20G)
00:01:24.277 ==> default: -- Volume Cache: default
00:01:24.277 ==> default: -- Kernel:
00:01:24.277 ==> default: -- Initrd:
00:01:24.277 ==> default: -- Graphics Type: vnc
00:01:24.277 ==> default: -- Graphics Port: -1
00:01:24.277 ==> default: -- Graphics IP: 127.0.0.1
00:01:24.277 ==> default: -- Graphics Password: Not defined
00:01:24.277 ==> default: -- Video Type: cirrus
00:01:24.277 ==> default: -- Video VRAM: 9216
00:01:24.277 ==> default: -- Sound Type:
00:01:24.277 ==> default: -- Keymap: en-us
00:01:24.277 ==> default: -- TPM Path:
00:01:24.277 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:24.277 ==> default: -- Command line args:
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:24.277 ==> default: -> value=-drive,
00:01:24.277 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:24.277 ==> default: -> value=-drive,
00:01:24.277 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.277 ==> default: -> value=-drive,
00:01:24.277 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.277 ==> default: -> value=-drive,
00:01:24.277 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:24.277 ==> default: -> value=-device,
00:01:24.277 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.277 ==> default: Creating shared folders metadata...
==> default: Starting domain.
00:01:26.197 ==> default: Waiting for domain to get an IP address...
00:01:44.274 ==> default: Waiting for SSH to become available...
00:01:44.275 ==> default: Configuring and enabling network interfaces...
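Note on the NVMe wiring in the "Command line args" above: each raw backing file is attached as a "-drive ... if=none,id=<id>" and then bound to an emulated controller through an "nvme-ns" device, which is how controller nvme-1 ends up exposing three namespaces (the multi0/multi1/multi2 images). A minimal standalone QEMU invocation using the same option pattern is sketched below; the memory size is an illustrative assumption, while the NVMe options are copied from the args listed above.

# Sketch: one emulated NVMe controller with a single namespace backed by a raw file.
qemu-system-x86_64 -m 1024 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096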
00:01:48.465 default: SSH address: 192.168.121.231:22
00:01:48.465 default: SSH username: vagrant
00:01:48.465 default: SSH auth method: private key
00:01:50.369 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:58.508 ==> default: Mounting SSHFS shared folder...
00:02:00.407 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:00.407 ==> default: Checking Mount..
00:02:01.341 ==> default: Folder Successfully Mounted!
00:02:01.341 ==> default: Running provisioner: file...
00:02:02.274 default: ~/.gitconfig => .gitconfig
00:02:02.533
00:02:02.533 SUCCESS!
00:02:02.533
00:02:02.533 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:02.533 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:02.533 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:02.533
00:02:02.541 [Pipeline] }
00:02:02.553 [Pipeline] // stage
00:02:02.559 [Pipeline] dir
00:02:02.559 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:02:02.560 [Pipeline] {
00:02:02.570 [Pipeline] catchError
00:02:02.572 [Pipeline] {
00:02:02.584 [Pipeline] sh
00:02:02.860 + vagrant ssh-config --host vagrant
00:02:02.860 + sed -ne /^Host/,$p
00:02:02.860 + tee ssh_conf
00:02:06.209 Host vagrant
00:02:06.209 HostName 192.168.121.231
00:02:06.209 User vagrant
00:02:06.209 Port 22
00:02:06.209 UserKnownHostsFile /dev/null
00:02:06.209 StrictHostKeyChecking no
00:02:06.209 PasswordAuthentication no
00:02:06.209 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:06.209 IdentitiesOnly yes
00:02:06.209 LogLevel FATAL
00:02:06.209 ForwardAgent yes
00:02:06.209 ForwardX11 yes
00:02:06.209
00:02:06.221 [Pipeline] withEnv
00:02:06.223 [Pipeline] {
00:02:06.236 [Pipeline] sh
00:02:06.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:06.517 source /etc/os-release
00:02:06.517 [[ -e /image.version ]] && img=$(< /image.version)
00:02:06.517 # Minimal, systemd-like check.
00:02:06.517 if [[ -e /.dockerenv ]]; then
00:02:06.517 # Clear garbage from the node's name:
00:02:06.517 # agt-er_autotest_547-896 -> autotest_547-896
00:02:06.517 # $HOSTNAME is the actual container id
00:02:06.517 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:06.517 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:06.517 # We can assume this is a mount from a host where container is running,
00:02:06.517 # so fetch its hostname to easily identify the target swarm worker.
00:02:06.517 container="$(< /etc/hostname) ($agent)"
00:02:06.517 else
00:02:06.517 # Fallback
00:02:06.517 container=$agent
00:02:06.517 fi
00:02:06.517 fi
00:02:06.517 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:06.517
00:02:06.788 [Pipeline] }
00:02:06.804 [Pipeline] // withEnv
00:02:06.812 [Pipeline] setCustomBuildProperty
00:02:06.827 [Pipeline] stage
00:02:06.829 [Pipeline] { (Tests)
00:02:06.847 [Pipeline] sh
00:02:07.128 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:07.399 [Pipeline] sh
00:02:07.679 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:07.952 [Pipeline] timeout
00:02:07.952 Timeout set to expire in 1 hr 0 min
00:02:07.954 [Pipeline] {
00:02:07.968 [Pipeline] sh
00:02:08.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:08.815 HEAD is now at a393e5e6e [TEST]
00:02:08.827 [Pipeline] sh
00:02:09.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:09.379 [Pipeline] sh
00:02:09.655 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:09.928 [Pipeline] sh
00:02:10.208 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:10.468 ++ readlink -f spdk_repo
00:02:10.468 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:10.468 + [[ -n /home/vagrant/spdk_repo ]]
00:02:10.468 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:10.468 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:10.468 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:10.468 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:10.468 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:10.468 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:10.468 + cd /home/vagrant/spdk_repo
00:02:10.468 + source /etc/os-release
00:02:10.468 ++ NAME='Fedora Linux'
00:02:10.468 ++ VERSION='39 (Cloud Edition)'
00:02:10.468 ++ ID=fedora
00:02:10.468 ++ VERSION_ID=39
00:02:10.468 ++ VERSION_CODENAME=
00:02:10.468 ++ PLATFORM_ID=platform:f39
00:02:10.468 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:10.468 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:10.468 ++ LOGO=fedora-logo-icon
00:02:10.468 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:10.468 ++ HOME_URL=https://fedoraproject.org/
00:02:10.468 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:10.468 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:10.468 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:10.468 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:10.468 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:10.468 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:10.468 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:10.468 ++ SUPPORT_END=2024-11-12
00:02:10.468 ++ VARIANT='Cloud Edition'
00:02:10.468 ++ VARIANT_ID=cloud
00:02:10.468 + uname -a
00:02:10.468 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:10.468 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:10.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:10.985 Hugepages
00:02:10.985 node hugesize free / total
00:02:10.985 node0 1048576kB 0 / 0
00:02:10.985 node0 2048kB 0 / 0
00:02:10.985
00:02:10.985 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.985 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:10.985 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:10.985 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:10.985 + rm -f /tmp/spdk-ld-path
00:02:10.985 + source autorun-spdk.conf
00:02:10.985 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.985 ++ SPDK_TEST_NVMF=1
00:02:10.985 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:10.985 ++ SPDK_TEST_USDT=1
00:02:10.985 ++ SPDK_TEST_NVMF_MDNS=1
00:02:10.985 ++ SPDK_RUN_UBSAN=1
00:02:10.985 ++ NET_TYPE=virt
00:02:10.985 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:10.985 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.985 ++ RUN_NIGHTLY=0
00:02:10.985 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.985 + [[ -n '' ]]
00:02:10.985 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:10.985 + for M in /var/spdk/build-*-manifest.txt
00:02:10.985 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:10.985 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.985 + for M in /var/spdk/build-*-manifest.txt
00:02:10.985 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.985 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.985 + for M in /var/spdk/build-*-manifest.txt
00:02:10.985 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.985 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:10.985 ++ uname
00:02:10.985 + [[ Linux == \L\i\n\u\x ]]
00:02:10.985 + sudo dmesg -T
00:02:10.985 + sudo dmesg --clear
00:02:10.985 + dmesg_pid=5260
00:02:10.985 + sudo dmesg -Tw
00:02:10.985 + [[ Fedora Linux == FreeBSD ]]
00:02:10.985 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.985 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.985 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.985 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.985 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.985 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.985 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.985 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.985 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.985 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.985 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.985 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.985 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.985 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.985 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:11.244 11:22:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
11:22:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
11:22:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
11:22:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
11:22:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
11:22:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
11:22:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
11:22:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
11:22:16 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
11:22:16 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
11:22:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
11:22:16 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
11:22:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
11:22:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
11:22:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
11:22:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
11:22:16 -- scripts/common.sh@15 -- $ shopt -s extglob
11:22:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
11:22:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:22:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:22:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:22:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:22:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:22:16 -- paths/export.sh@5 -- $ export PATH
11:22:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:22:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
11:22:16 -- common/autobuild_common.sh@493 -- $ date +%s
11:22:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734348136.XXXXXX
11:22:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734348136.8nh6TU
11:22:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
11:22:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
11:22:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
11:22:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
11:22:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
11:22:16 -- common/autobuild_common.sh@509 -- $ get_config_params
11:22:16 -- common/autotest_common.sh@409 -- $ xtrace_disable
11:22:16 -- common/autotest_common.sh@10 -- $ set +x
11:22:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
11:22:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
11:22:16 -- pm/common@17 -- $ local monitor
11:22:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:22:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:22:16 -- pm/common@25 -- $ sleep 1
11:22:16 -- pm/common@21 -- $ date +%s
11:22:16 -- pm/common@21 -- $ date +%s
11:22:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348136
11:22:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348136
00:02:11.245 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348136_collect-cpu-load.pm.log
00:02:11.245 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348136_collect-vmstat.pm.log
00:02:12.180 11:22:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
11:22:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:22:17 -- spdk/autobuild.sh@12 -- $ umask 022
11:22:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
11:22:17 -- spdk/autobuild.sh@16 -- $ date -u
00:02:12.180 Mon Dec 16 11:22:17 AM UTC 2024
00:02:12.180 11:22:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:12.180 v25.01-rc1-3-ga393e5e6e
00:02:12.180 11:22:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
11:22:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
11:22:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
11:22:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
11:22:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
11:22:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.180 ************************************
00:02:12.180 START TEST ubsan
00:02:12.180 ************************************
00:02:12.180 11:22:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:12.180 using ubsan
00:02:12.180
00:02:12.180 real 0m0.000s
00:02:12.180 user 0m0.000s
00:02:12.180 sys 0m0.000s
00:02:12.180 11:22:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
11:22:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:12.180 ************************************
00:02:12.180 END TEST ubsan
00:02:12.180 ************************************
00:02:12.439 11:22:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
11:22:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
11:22:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
11:22:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
11:22:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
11:22:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
11:22:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
11:22:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
11:22:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:12.439 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:12.439 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:13.005 Using 'verbs' RDMA provider
00:02:28.464 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:40.685 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:40.685 go version go1.21.1 linux/amd64
00:02:40.685 Creating mk/config.mk...done.
00:02:40.685 Creating mk/cc.flags.mk...done.
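For readers following the trace: run_test is the timing/reporting wrapper from SPDK's test/common/autotest_common.sh (the frames above reference it). Judging only from the output it produces here (the START TEST/END TEST banners plus the real/user/sys lines), it behaves roughly like the sketch below; this is an illustration of the pattern, not the project's actual implementation.

run_test() {
    # Print a banner, run the wrapped command under `time`, print a closing banner.
    # Sketch only; the real helper also toggles xtrace and manages timing output.
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# e.g. `run_test ubsan echo 'using ubsan'`, as invoked by autobuild.sh above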
00:02:40.685 Type 'make' to build.
00:02:40.685 11:22:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
11:22:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
11:22:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
11:22:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.685 ************************************
00:02:40.685 START TEST make
00:02:40.685 ************************************
00:02:40.685 11:22:45 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:55.564 The Meson build system
00:02:55.564 Version: 1.5.0
00:02:55.564 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:55.564 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:55.564 Build type: native build
00:02:55.564 Program cat found: YES (/usr/bin/cat)
00:02:55.564 Project name: DPDK
00:02:55.564 Project version: 24.03.0
00:02:55.564 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:55.564 C linker for the host machine: cc ld.bfd 2.40-14
00:02:55.564 Host machine cpu family: x86_64
00:02:55.564 Host machine cpu: x86_64
00:02:55.564 Message: ## Building in Developer Mode ##
00:02:55.564 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:55.564 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:55.564 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:55.564 Program python3 found: YES (/usr/bin/python3)
00:02:55.564 Program cat found: YES (/usr/bin/cat)
00:02:55.564 Compiler for C supports arguments -march=native: YES
00:02:55.564 Checking for size of "void *" : 8
00:02:55.564 Checking for size of "void *" : 8 (cached)
00:02:55.564 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:55.564 Library m found: YES
00:02:55.564 Library numa found: YES
00:02:55.564 Has header "numaif.h" : YES
00:02:55.564 Library fdt found: NO
00:02:55.564 Library execinfo found: NO
00:02:55.564 Has header "execinfo.h" : YES
00:02:55.564 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:55.564 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:55.564 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:55.564 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:55.564 Run-time dependency openssl found: YES 3.1.1
00:02:55.564 Run-time dependency libpcap found: YES 1.10.4
00:02:55.564 Has header "pcap.h" with dependency libpcap: YES
00:02:55.564 Compiler for C supports arguments -Wcast-qual: YES
00:02:55.564 Compiler for C supports arguments -Wdeprecated: YES
00:02:55.564 Compiler for C supports arguments -Wformat: YES
00:02:55.564 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:55.564 Compiler for C supports arguments -Wformat-security: NO
00:02:55.564 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:55.564 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:55.564 Compiler for C supports arguments -Wnested-externs: YES
00:02:55.564 Compiler for C supports arguments -Wold-style-definition: YES
00:02:55.564 Compiler for C supports arguments -Wpointer-arith: YES
00:02:55.564 Compiler for C supports arguments -Wsign-compare: YES
00:02:55.564 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:55.564 Compiler for C supports arguments -Wundef: YES
00:02:55.564 Compiler for C supports arguments -Wwrite-strings: YES
00:02:55.564 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:55.564 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:55.564 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:55.564 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:55.564 Program objdump found: YES (/usr/bin/objdump)
00:02:55.564 Compiler for C supports arguments -mavx512f: YES
00:02:55.564 Checking if "AVX512 checking" compiles: YES
00:02:55.564 Fetching value of define "__SSE4_2__" : 1
00:02:55.564 Fetching value of define "__AES__" : 1
00:02:55.564 Fetching value of define "__AVX__" : 1
00:02:55.564 Fetching value of define "__AVX2__" : 1
00:02:55.564 Fetching value of define "__AVX512BW__" : (undefined)
00:02:55.564 Fetching value of define "__AVX512CD__" : (undefined)
00:02:55.564 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:55.564 Fetching value of define "__AVX512F__" : (undefined)
00:02:55.564 Fetching value of define "__AVX512VL__" : (undefined)
00:02:55.564 Fetching value of define "__PCLMUL__" : 1
00:02:55.564 Fetching value of define "__RDRND__" : 1
00:02:55.564 Fetching value of define "__RDSEED__" : 1
00:02:55.564 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:55.564 Fetching value of define "__znver1__" : (undefined)
00:02:55.564 Fetching value of define "__znver2__" : (undefined)
00:02:55.564 Fetching value of define "__znver3__" : (undefined)
00:02:55.564 Fetching value of define "__znver4__" : (undefined)
00:02:55.564 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:55.564 Message: lib/log: Defining dependency "log"
00:02:55.564 Message: lib/kvargs: Defining dependency "kvargs"
00:02:55.564 Message: lib/telemetry: Defining dependency "telemetry"
00:02:55.564 Checking for function "getentropy" : NO
00:02:55.564 Message: lib/eal: Defining dependency "eal"
00:02:55.564 Message: lib/ring: Defining dependency "ring"
00:02:55.564 Message: lib/rcu: Defining dependency "rcu"
00:02:55.564 Message: lib/mempool: Defining dependency "mempool"
00:02:55.564 Message: lib/mbuf: Defining dependency "mbuf"
00:02:55.564 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:55.564 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:55.564 Compiler for C supports arguments -mpclmul: YES
00:02:55.564 Compiler for C supports arguments -maes: YES
00:02:55.564 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:55.564 Compiler for C supports arguments -mavx512bw: YES
00:02:55.564 Compiler for C supports arguments -mavx512dq: YES
00:02:55.564 Compiler for C supports arguments -mavx512vl: YES
00:02:55.564 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:55.564 Compiler for C supports arguments -mavx2: YES
00:02:55.564 Compiler for C supports arguments -mavx: YES
00:02:55.564 Message: lib/net: Defining dependency "net"
00:02:55.564 Message: lib/meter: Defining dependency "meter"
00:02:55.564 Message: lib/ethdev: Defining dependency "ethdev"
00:02:55.564 Message: lib/pci: Defining dependency "pci"
00:02:55.564 Message: lib/cmdline: Defining dependency "cmdline"
00:02:55.564 Message: lib/hash: Defining dependency "hash"
00:02:55.564 Message: lib/timer: Defining dependency "timer"
00:02:55.564 Message: lib/compressdev: Defining dependency "compressdev"
00:02:55.564 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:55.564 Message: lib/dmadev: Defining dependency "dmadev"
00:02:55.564 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:55.564 Message: lib/power: Defining dependency "power"
00:02:55.565 Message: lib/reorder: Defining dependency "reorder"
00:02:55.565 Message: lib/security: Defining dependency "security"
00:02:55.565 Has header "linux/userfaultfd.h" : YES
00:02:55.565 Has header "linux/vduse.h" : YES
00:02:55.565 Message: lib/vhost: Defining dependency "vhost"
00:02:55.565 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:55.565 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:55.565 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:55.565 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:55.565 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:55.565 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:55.565 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:55.565 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:55.565 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:55.565 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:55.565 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:55.565 Configuring doxy-api-html.conf using configuration
00:02:55.565 Configuring doxy-api-man.conf using configuration
00:02:55.565 Program mandb found: YES (/usr/bin/mandb)
00:02:55.565 Program sphinx-build found: NO
00:02:55.565 Configuring rte_build_config.h using configuration
00:02:55.565 Message:
00:02:55.565 =================
00:02:55.565 Applications Enabled
00:02:55.565 =================
00:02:55.565
00:02:55.565 apps:
00:02:55.565
00:02:55.565
00:02:55.565 Message:
00:02:55.565 =================
00:02:55.565 Libraries Enabled
00:02:55.565 =================
00:02:55.565
00:02:55.565 libs:
00:02:55.565 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:55.565 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:55.565 cryptodev, dmadev, power, reorder, security, vhost,
00:02:55.565
00:02:55.565 Message:
00:02:55.565 ===============
00:02:55.565 Drivers Enabled
00:02:55.565 ===============
00:02:55.565
00:02:55.565 common:
00:02:55.565
00:02:55.565 bus:
00:02:55.565 pci, vdev,
00:02:55.565 mempool:
00:02:55.565 ring,
00:02:55.565 dma:
00:02:55.565
00:02:55.565 net:
00:02:55.565
00:02:55.565 crypto:
00:02:55.565
00:02:55.565 compress:
00:02:55.565
00:02:55.565 vdpa:
00:02:55.565
00:02:55.565
00:02:55.565 Message:
00:02:55.565 =================
00:02:55.565 Content Skipped
00:02:55.565 =================
00:02:55.565
00:02:55.565 apps:
00:02:55.565 dumpcap: explicitly disabled via build config
00:02:55.565 graph: explicitly disabled via build config
00:02:55.565 pdump: explicitly disabled via build config
00:02:55.565 proc-info: explicitly disabled via build config
00:02:55.565 test-acl: explicitly disabled via build config
00:02:55.565 test-bbdev: explicitly disabled via build config
00:02:55.565 test-cmdline: explicitly disabled via build config
00:02:55.565 test-compress-perf: explicitly disabled via build config
00:02:55.565 test-crypto-perf: explicitly disabled via build config
00:02:55.565 test-dma-perf: explicitly disabled via build config
00:02:55.565 test-eventdev: explicitly disabled via build config
00:02:55.565 test-fib: explicitly disabled via build config
00:02:55.565 test-flow-perf: explicitly disabled via build config
00:02:55.565 test-gpudev: explicitly disabled via build config
00:02:55.565 test-mldev: explicitly disabled via build config
00:02:55.565 test-pipeline: explicitly disabled via build config
00:02:55.565 test-pmd: explicitly disabled via build config
00:02:55.565 test-regex: explicitly disabled via build config
00:02:55.565 test-sad: explicitly disabled via build config
00:02:55.565 test-security-perf: explicitly disabled via build config
00:02:55.565
00:02:55.565 libs:
00:02:55.565 argparse: explicitly disabled via build config
00:02:55.565 metrics: explicitly disabled via build config
00:02:55.565 acl: explicitly disabled via build config
00:02:55.565 bbdev: explicitly disabled via build config
00:02:55.565 bitratestats: explicitly disabled via build config
00:02:55.565 bpf: explicitly disabled via build config
00:02:55.565 cfgfile: explicitly disabled via build config
00:02:55.565 distributor: explicitly disabled via build config
00:02:55.565 efd: explicitly disabled via build config
00:02:55.565 eventdev: explicitly disabled via build config
00:02:55.565 dispatcher: explicitly disabled via build config
00:02:55.565 gpudev: explicitly disabled via build config
00:02:55.565 gro: explicitly disabled via build config
00:02:55.565 gso: explicitly disabled via build config
00:02:55.565 ip_frag: explicitly disabled via build config
00:02:55.565 jobstats: explicitly disabled via build config
00:02:55.565 latencystats: explicitly disabled via build config
00:02:55.565 lpm: explicitly disabled via build config
00:02:55.565 member: explicitly disabled via build config
00:02:55.565 pcapng: explicitly disabled via build config
00:02:55.565 rawdev: explicitly disabled via build config
00:02:55.565 regexdev: explicitly disabled via build config
00:02:55.565 mldev: explicitly disabled via build config
00:02:55.565 rib: explicitly disabled via build config
00:02:55.565 sched: explicitly disabled via build config
00:02:55.565 stack: explicitly disabled via build config
00:02:55.565 ipsec: explicitly disabled via build config
00:02:55.565 pdcp: explicitly disabled via build config
00:02:55.565 fib: explicitly disabled via build config
00:02:55.565 port: explicitly disabled via build config
00:02:55.565 pdump: explicitly disabled via build config
00:02:55.565 table: explicitly disabled via build config
00:02:55.565 pipeline: explicitly disabled via build config
00:02:55.565 graph: explicitly disabled via build config
00:02:55.565 node: explicitly disabled via build config
00:02:55.565
00:02:55.565 drivers:
00:02:55.565 common/cpt: not in enabled drivers build config
00:02:55.565 common/dpaax: not in enabled drivers build config
00:02:55.565 common/iavf: not in enabled drivers build config
00:02:55.565 common/idpf: not in enabled drivers build config
00:02:55.565 common/ionic: not in enabled drivers build config
00:02:55.565 common/mvep: not in enabled drivers build config
00:02:55.565 common/octeontx: not in enabled drivers build config
00:02:55.565 bus/auxiliary: not in enabled drivers build config
00:02:55.565 bus/cdx: not in enabled drivers build config
00:02:55.565 bus/dpaa: not in enabled drivers build config
00:02:55.565 bus/fslmc: not in enabled drivers build config
00:02:55.565 bus/ifpga: not in enabled drivers build config
00:02:55.565 bus/platform: not in enabled drivers build config
00:02:55.565 bus/uacce: not in enabled drivers build config
00:02:55.565 bus/vmbus: not in enabled drivers build config
00:02:55.565 common/cnxk: not in enabled drivers build config
00:02:55.565 common/mlx5: not in enabled drivers build config
00:02:55.565 common/nfp: not in enabled drivers build config
00:02:55.565 common/nitrox: not in enabled drivers build config
00:02:55.565 common/qat: not in enabled drivers build config
00:02:55.565 common/sfc_efx: not in enabled drivers build config
00:02:55.565 mempool/bucket: not in enabled drivers build config
00:02:55.565 mempool/cnxk: not in enabled drivers build config
00:02:55.565 mempool/dpaa: not in enabled drivers build config
00:02:55.565 mempool/dpaa2: not in enabled drivers build config
00:02:55.565 mempool/octeontx: not in enabled drivers build config
00:02:55.565 mempool/stack: not in enabled drivers build config
00:02:55.565 dma/cnxk: not in enabled drivers build config
00:02:55.565 dma/dpaa: not in enabled drivers build config
00:02:55.565 dma/dpaa2: not in enabled drivers build config
00:02:55.565 dma/hisilicon: not in enabled drivers build config
00:02:55.565 dma/idxd: not in enabled drivers build config
00:02:55.565 dma/ioat: not in enabled drivers build config
00:02:55.565 dma/skeleton: not in enabled drivers build config
00:02:55.565 net/af_packet: not in enabled drivers build config
00:02:55.565 net/af_xdp: not in enabled drivers build config
00:02:55.565 net/ark: not in enabled drivers build config
00:02:55.565 net/atlantic: not in enabled drivers build config
00:02:55.565 net/avp: not in enabled drivers build config
00:02:55.565 net/axgbe: not in enabled drivers build config
00:02:55.565 net/bnx2x: not in enabled drivers build config
00:02:55.565 net/bnxt: not in enabled drivers build config
00:02:55.565 net/bonding: not in enabled drivers build config
00:02:55.565 net/cnxk: not in enabled drivers build config
00:02:55.565 net/cpfl: not in enabled drivers build config
00:02:55.565 net/cxgbe: not in enabled drivers build config
00:02:55.565 net/dpaa: not in enabled drivers build config
00:02:55.565 net/dpaa2: not in enabled drivers build config
00:02:55.565 net/e1000: not in enabled drivers build config
00:02:55.565 net/ena: not in enabled drivers build config
00:02:55.565 net/enetc: not in enabled drivers build config
00:02:55.565 net/enetfec: not in enabled drivers build config
00:02:55.565 net/enic: not in enabled drivers build config
00:02:55.565 net/failsafe: not in enabled drivers build config
00:02:55.565 net/fm10k: not in enabled drivers build config
00:02:55.565 net/gve: not in enabled drivers build config
00:02:55.565 net/hinic: not in enabled drivers build config
00:02:55.565 net/hns3: not in enabled drivers build config
00:02:55.565 net/i40e: not in enabled drivers build config
00:02:55.565 net/iavf: not in enabled drivers build config
00:02:55.565 net/ice: not in enabled drivers build config
00:02:55.565 net/idpf: not in enabled drivers build config
00:02:55.565 net/igc: not in enabled drivers build config
00:02:55.565 net/ionic: not in enabled drivers build config
00:02:55.565 net/ipn3ke: not in enabled drivers build config
00:02:55.565 net/ixgbe: not in enabled drivers build config
00:02:55.565 net/mana: not in enabled drivers build config
00:02:55.565 net/memif: not in enabled drivers build config
00:02:55.565 net/mlx4: not in enabled drivers build config
00:02:55.565 net/mlx5: not in enabled drivers build config
00:02:55.565 net/mvneta: not in enabled drivers build config
00:02:55.565 net/mvpp2: not in enabled drivers build config
00:02:55.565 net/netvsc: not in enabled drivers build config
00:02:55.565 net/nfb: not in enabled drivers build config
00:02:55.565 net/nfp: not in enabled drivers build config
00:02:55.565 net/ngbe: not in enabled drivers build config
00:02:55.565 net/null: not in enabled drivers build config
00:02:55.565 net/octeontx: not in enabled drivers build config
00:02:55.565 net/octeon_ep: not in enabled drivers build config
00:02:55.565 net/pcap: not in enabled drivers build config
00:02:55.565 net/pfe: not in enabled drivers build config
00:02:55.565 net/qede: not in enabled drivers build config
00:02:55.565 net/ring: not in enabled drivers build config
00:02:55.565 net/sfc: not in enabled drivers build config
00:02:55.565 net/softnic: not in enabled drivers build config
00:02:55.565 net/tap: not in enabled drivers build config
00:02:55.565 net/thunderx: not in enabled drivers build config
00:02:55.565 net/txgbe: not in enabled drivers build config
00:02:55.565 net/vdev_netvsc: not in enabled drivers build config
00:02:55.565 net/vhost: not in enabled drivers build config
00:02:55.565 net/virtio: not in enabled drivers build config
00:02:55.566 net/vmxnet3: not in enabled drivers build config
00:02:55.566 raw/*: missing internal dependency, "rawdev"
00:02:55.566 crypto/armv8: not in enabled drivers build config
00:02:55.566 crypto/bcmfs: not in enabled drivers build config
00:02:55.566 crypto/caam_jr: not in enabled drivers build config
00:02:55.566 crypto/ccp: not in enabled drivers build config
00:02:55.566 crypto/cnxk: not in enabled drivers build config
00:02:55.566 crypto/dpaa_sec: not in enabled drivers build config
00:02:55.566 crypto/dpaa2_sec: not in enabled drivers build config
00:02:55.566 crypto/ipsec_mb: not in enabled drivers build config
00:02:55.566 crypto/mlx5: not in enabled drivers build config
00:02:55.566 crypto/mvsam: not in enabled drivers build config
00:02:55.566 crypto/nitrox: not in enabled drivers build config
00:02:55.566 crypto/null: not in enabled drivers build config
00:02:55.566 crypto/octeontx: not in enabled drivers build config
00:02:55.566 crypto/openssl: not in enabled drivers build config
00:02:55.566 crypto/scheduler: not in enabled drivers build config
00:02:55.566 crypto/uadk: not in enabled drivers build config
00:02:55.566 crypto/virtio: not in enabled drivers build config
00:02:55.566 compress/isal: not in enabled drivers build config
00:02:55.566 compress/mlx5: not in enabled drivers build config
00:02:55.566 compress/nitrox: not in enabled drivers build config
00:02:55.566 compress/octeontx: not in enabled drivers build config
00:02:55.566 compress/zlib: not in enabled drivers build config
00:02:55.566 regex/*: missing internal dependency, "regexdev"
00:02:55.566 ml/*: missing internal dependency, "mldev"
00:02:55.566 vdpa/ifc: not in enabled drivers build config
00:02:55.566 vdpa/mlx5: not in enabled drivers build config
00:02:55.566 vdpa/nfp: not in enabled drivers build config
00:02:55.566 vdpa/sfc: not in enabled drivers build config
00:02:55.566 event/*: missing internal dependency, "eventdev"
00:02:55.566 baseband/*: missing internal dependency, "bbdev"
00:02:55.566 gpu/*: missing internal dependency, "gpudev"
00:02:55.566
00:02:55.566 Build targets in project: 85
00:02:55.566
00:02:55.566 DPDK 24.03.0
00:02:55.566
00:02:55.566 User defined options
00:02:55.566 buildtype : debug
00:02:55.566 default_library : shared
00:02:55.566 libdir : lib
00:02:55.566 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:55.566 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:55.566 c_link_args :
00:02:55.566 cpu_instruction_set: native
00:02:55.566 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:55.566 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:55.566 enable_docs : false
00:02:55.566 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:55.566 enable_kmods : false
00:02:55.566 max_lcores : 128
00:02:55.566 tests : false
00:02:55.566
00:02:55.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:55.566 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
[1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[3/268] Linking static target lib/librte_kvargs.a
[4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
[6/268] Linking static target lib/librte_log.a
[7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[19/268] Linking static target lib/librte_telemetry.a
[20/268] Linking target lib/librte_log.so.24.1
00:02:55.825 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:55.825 [22/268] Linking target lib/librte_kvargs.so.24.1
00:02:55.825 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:55.825 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:56.084 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:56.084 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:56.084 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:56.084 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:56.084 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:56.343 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:56.343 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:56.343 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.343 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:56.343 [34/268] Linking target lib/librte_telemetry.so.24.1
00:02:56.343 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:56.604 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:56.604 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:56.885 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:56.885 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:56.885 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:56.885 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:57.160 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:57.160 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:57.160 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:57.160 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:57.160 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:57.160 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:57.419 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:57.419 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:57.419 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:57.679 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:57.679 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:57.938 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:57.938 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:58.197 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:58.197 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:58.197 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:58.197 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:58.456 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:58.456 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:58.456 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:58.456 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:58.714 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:58.714 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:58.973 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:58.973 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:58.973 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:59.231 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:59.231 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:59.231 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:59.231 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:59.231 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:59.490 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:59.490 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:59.490 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:59.490 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:59.490 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:59.748 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:59.748 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:59.748 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:59.748 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:59.748 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:00.007 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:00.007 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:00.007 [85/268] Linking static target lib/librte_ring.a
00:03:00.007 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:00.265 [87/268] Linking static target lib/librte_eal.a
00:03:00.265 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:00.265 [89/268] Linking static target lib/librte_rcu.a
00:03:00.265 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:00.523 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:00.523 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:00.523 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:00.523 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.523 [95/268] Linking static target lib/librte_mempool.a
00:03:00.523 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:00.782 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:00.782 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:01.040 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.040 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:01.040 [101/268] Linking static target lib/librte_mbuf.a
00:03:01.040 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:01.040 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:01.040 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:01.299 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:01.299 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:01.299 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:01.299 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:01.558 [109/268] Linking static target lib/librte_net.a
00:03:01.817 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:01.817 [111/268] Linking static target lib/librte_meter.a
00:03:01.817 [112/268] Compiling C object
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:01.817 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:01.817 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.817 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.817 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.817 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.076 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.076 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.334 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.593 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.593 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.593 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.852 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.852 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.852 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.852 [127/268] Linking static target lib/librte_pci.a 00:03:03.110 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.110 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.110 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.369 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.369 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.369 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.369 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.369 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.369 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:03.369 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.369 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.369 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.369 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.370 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.370 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.370 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.370 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.628 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.628 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.887 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.887 [148/268] Linking static target lib/librte_cmdline.a 00:03:04.146 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.146 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.146 [151/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.405 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:04.405 [153/268] Linking static target lib/librte_ethdev.a 00:03:04.405 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.405 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.405 [156/268] Linking static target lib/librte_timer.a 00:03:04.405 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.405 [158/268] Linking static target lib/librte_hash.a 00:03:04.405 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.972 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.972 [161/268] Linking static target lib/librte_compressdev.a 00:03:04.972 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.972 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.972 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.972 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:05.231 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.231 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:05.490 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.490 [169/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.490 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.490 [171/268] Linking static target lib/librte_dmadev.a 00:03:05.490 [172/268] Linking static target lib/librte_cryptodev.a 00:03:05.749 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.749 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:05.749 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:05.749 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:05.749 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.749 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.316 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:06.316 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:06.316 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:06.316 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:06.316 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:06.574 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:06.574 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.574 [186/268] Linking static target lib/librte_reorder.a 00:03:06.574 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.574 [188/268] Linking static target lib/librte_power.a 00:03:07.142 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:07.142 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:07.142 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:07.142 [192/268] Linking static target lib/librte_security.a 00:03:07.142 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:07.142 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.709 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.967 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.967 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.967 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.967 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.225 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:08.225 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.484 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.484 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.484 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.743 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.743 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.743 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.001 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.001 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.001 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.260 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:09.260 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:09.260 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.260 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.260 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.260 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.260 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.260 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.518 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:09.518 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.518 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.518 [222/268] Linking static target drivers/librte_bus_vdev.a 00:03:09.518 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:09.518 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.518 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.518 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:09.777 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.035 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:10.601 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.601 [230/268] Linking static target lib/librte_vhost.a 00:03:11.167 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.424 [232/268] Linking target lib/librte_eal.so.24.1 00:03:11.424 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:11.424 [234/268] Linking target lib/librte_timer.so.24.1 00:03:11.424 [235/268] Linking target lib/librte_ring.so.24.1 00:03:11.424 [236/268] Linking target lib/librte_meter.so.24.1 00:03:11.424 [237/268] Linking target lib/librte_pci.so.24.1 00:03:11.424 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:11.424 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:11.683 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:11.683 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:11.683 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:11.683 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:11.683 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:11.683 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:11.683 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:11.683 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:11.941 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:11.941 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:11.941 [250/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.941 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:11.941 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:11.941 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.941 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:11.941 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:11.941 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:11.941 [257/268] Linking target lib/librte_net.so.24.1 00:03:11.941 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:12.199 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:12.199 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:12.199 [261/268] Linking target lib/librte_security.so.24.1 00:03:12.200 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:12.200 [263/268] Linking target lib/librte_hash.so.24.1 00:03:12.200 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:12.457 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:12.457 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:12.457 [267/268] Linking target lib/librte_power.so.24.1 00:03:12.457 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:12.457 INFO: autodetecting backend as ninja 00:03:12.457 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:38.997 CC lib/ut_mock/mock.o 00:03:38.997 CC lib/ut/ut.o 00:03:38.997 CC lib/log/log_flags.o 00:03:38.997 CC lib/log/log.o 
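The [268/268] linking step above closes out the DPDK sub-build, and meson's option summary printed at the start of this section (disable_libs, enable_drivers, enable_kmods, max_lcores, tests) maps onto standard DPDK meson options. A rough sketch of the equivalent manual configuration, reusing only the values the log actually prints (the disable_apps list is cut off in this capture, so it is left out; everything else is copied verbatim):

    # Sketch only: reconstructs the configuration summary meson printed above.
    # Option values are copied from the log; any flags beyond these are unknown.
    meson setup build-tmp \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
    ninja -C build-tmp -j 10    # matches the backend command reported above

The [n/268] progress counter and the sym_chk/symbol-file steps are ninja executing exactly that backend command inside dpdk/build-tmp.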
00:03:38.997 CC lib/log/log_deprecated.o 00:03:38.997 LIB libspdk_ut.a 00:03:38.997 LIB libspdk_ut_mock.a 00:03:38.997 LIB libspdk_log.a 00:03:38.997 SO libspdk_ut.so.2.0 00:03:38.997 SO libspdk_ut_mock.so.6.0 00:03:38.997 SO libspdk_log.so.7.1 00:03:38.997 SYMLINK libspdk_ut_mock.so 00:03:38.997 SYMLINK libspdk_ut.so 00:03:38.997 SYMLINK libspdk_log.so 00:03:38.997 CC lib/util/base64.o 00:03:38.997 CC lib/dma/dma.o 00:03:38.997 CC lib/util/cpuset.o 00:03:38.997 CC lib/util/bit_array.o 00:03:38.997 CC lib/ioat/ioat.o 00:03:38.997 CC lib/util/crc32.o 00:03:38.997 CC lib/util/crc16.o 00:03:38.997 CXX lib/trace_parser/trace.o 00:03:38.997 CC lib/util/crc32c.o 00:03:38.997 CC lib/vfio_user/host/vfio_user_pci.o 00:03:38.997 CC lib/vfio_user/host/vfio_user.o 00:03:38.997 CC lib/util/crc32_ieee.o 00:03:38.997 CC lib/util/crc64.o 00:03:38.997 CC lib/util/dif.o 00:03:38.997 LIB libspdk_dma.a 00:03:38.997 CC lib/util/fd.o 00:03:38.997 SO libspdk_dma.so.5.0 00:03:38.997 CC lib/util/fd_group.o 00:03:38.997 LIB libspdk_ioat.a 00:03:38.997 SYMLINK libspdk_dma.so 00:03:38.997 CC lib/util/file.o 00:03:38.997 CC lib/util/hexlify.o 00:03:38.997 SO libspdk_ioat.so.7.0 00:03:38.997 CC lib/util/iov.o 00:03:38.997 CC lib/util/math.o 00:03:38.997 SYMLINK libspdk_ioat.so 00:03:38.997 CC lib/util/net.o 00:03:38.997 LIB libspdk_vfio_user.a 00:03:38.997 CC lib/util/pipe.o 00:03:38.997 SO libspdk_vfio_user.so.5.0 00:03:38.997 CC lib/util/strerror_tls.o 00:03:38.997 CC lib/util/string.o 00:03:38.997 SYMLINK libspdk_vfio_user.so 00:03:38.997 CC lib/util/uuid.o 00:03:38.997 CC lib/util/xor.o 00:03:38.997 CC lib/util/zipf.o 00:03:38.997 CC lib/util/md5.o 00:03:38.997 LIB libspdk_util.a 00:03:38.997 SO libspdk_util.so.10.1 00:03:38.997 LIB libspdk_trace_parser.a 00:03:38.997 SO libspdk_trace_parser.so.6.0 00:03:38.997 SYMLINK libspdk_util.so 00:03:38.997 SYMLINK libspdk_trace_parser.so 00:03:38.997 CC lib/vmd/vmd.o 00:03:38.997 CC lib/vmd/led.o 00:03:38.997 CC lib/rdma_utils/rdma_utils.o 00:03:38.997 CC lib/idxd/idxd.o 00:03:38.997 CC lib/idxd/idxd_user.o 00:03:38.997 CC lib/idxd/idxd_kernel.o 00:03:38.997 CC lib/json/json_parse.o 00:03:38.997 CC lib/json/json_util.o 00:03:38.997 CC lib/env_dpdk/env.o 00:03:38.997 CC lib/conf/conf.o 00:03:38.997 CC lib/env_dpdk/memory.o 00:03:38.997 CC lib/env_dpdk/pci.o 00:03:39.255 CC lib/env_dpdk/init.o 00:03:39.255 LIB libspdk_conf.a 00:03:39.255 SO libspdk_conf.so.6.0 00:03:39.255 CC lib/env_dpdk/threads.o 00:03:39.255 CC lib/json/json_write.o 00:03:39.255 LIB libspdk_rdma_utils.a 00:03:39.255 SYMLINK libspdk_conf.so 00:03:39.255 CC lib/env_dpdk/pci_ioat.o 00:03:39.255 SO libspdk_rdma_utils.so.1.0 00:03:39.255 SYMLINK libspdk_rdma_utils.so 00:03:39.255 CC lib/env_dpdk/pci_virtio.o 00:03:39.255 CC lib/env_dpdk/pci_vmd.o 00:03:39.512 CC lib/env_dpdk/pci_idxd.o 00:03:39.512 LIB libspdk_idxd.a 00:03:39.512 CC lib/env_dpdk/pci_event.o 00:03:39.512 CC lib/env_dpdk/sigbus_handler.o 00:03:39.512 CC lib/env_dpdk/pci_dpdk.o 00:03:39.512 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.513 SO libspdk_idxd.so.12.1 00:03:39.513 LIB libspdk_json.a 00:03:39.513 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.513 SO libspdk_json.so.6.0 00:03:39.513 SYMLINK libspdk_idxd.so 00:03:39.513 LIB libspdk_vmd.a 00:03:39.513 SYMLINK libspdk_json.so 00:03:39.771 SO libspdk_vmd.so.6.0 00:03:39.771 SYMLINK libspdk_vmd.so 00:03:39.771 CC lib/rdma_provider/common.o 00:03:39.771 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:39.771 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:39.771 CC lib/jsonrpc/jsonrpc_server.o 00:03:39.771 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:39.771 CC lib/jsonrpc/jsonrpc_client.o 00:03:40.028 LIB libspdk_rdma_provider.a 00:03:40.028 SO libspdk_rdma_provider.so.7.0 00:03:40.028 SYMLINK libspdk_rdma_provider.so 00:03:40.286 LIB libspdk_jsonrpc.a 00:03:40.286 SO libspdk_jsonrpc.so.6.0 00:03:40.286 SYMLINK libspdk_jsonrpc.so 00:03:40.286 LIB libspdk_env_dpdk.a 00:03:40.545 SO libspdk_env_dpdk.so.15.1 00:03:40.545 SYMLINK libspdk_env_dpdk.so 00:03:40.545 CC lib/rpc/rpc.o 00:03:40.803 LIB libspdk_rpc.a 00:03:40.803 SO libspdk_rpc.so.6.0 00:03:41.064 SYMLINK libspdk_rpc.so 00:03:41.325 CC lib/notify/notify_rpc.o 00:03:41.325 CC lib/notify/notify.o 00:03:41.325 CC lib/trace/trace_flags.o 00:03:41.325 CC lib/trace/trace.o 00:03:41.325 CC lib/keyring/keyring.o 00:03:41.325 CC lib/trace/trace_rpc.o 00:03:41.325 CC lib/keyring/keyring_rpc.o 00:03:41.325 LIB libspdk_notify.a 00:03:41.584 SO libspdk_notify.so.6.0 00:03:41.584 LIB libspdk_keyring.a 00:03:41.584 LIB libspdk_trace.a 00:03:41.584 SO libspdk_keyring.so.2.0 00:03:41.584 SYMLINK libspdk_notify.so 00:03:41.584 SO libspdk_trace.so.11.0 00:03:41.584 SYMLINK libspdk_keyring.so 00:03:41.584 SYMLINK libspdk_trace.so 00:03:41.841 CC lib/thread/iobuf.o 00:03:41.841 CC lib/thread/thread.o 00:03:41.841 CC lib/sock/sock.o 00:03:41.841 CC lib/sock/sock_rpc.o 00:03:42.408 LIB libspdk_sock.a 00:03:42.408 SO libspdk_sock.so.10.0 00:03:42.666 SYMLINK libspdk_sock.so 00:03:42.925 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.925 CC lib/nvme/nvme_ctrlr.o 00:03:42.925 CC lib/nvme/nvme_ns_cmd.o 00:03:42.925 CC lib/nvme/nvme_fabric.o 00:03:42.925 CC lib/nvme/nvme_ns.o 00:03:42.925 CC lib/nvme/nvme_pcie_common.o 00:03:42.925 CC lib/nvme/nvme_qpair.o 00:03:42.925 CC lib/nvme/nvme_pcie.o 00:03:42.925 CC lib/nvme/nvme.o 00:03:43.491 LIB libspdk_thread.a 00:03:43.750 SO libspdk_thread.so.11.0 00:03:43.750 CC lib/nvme/nvme_quirks.o 00:03:43.750 CC lib/nvme/nvme_transport.o 00:03:43.750 SYMLINK libspdk_thread.so 00:03:43.750 CC lib/nvme/nvme_discovery.o 00:03:43.750 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.750 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.750 CC lib/nvme/nvme_tcp.o 00:03:44.008 CC lib/nvme/nvme_opal.o 00:03:44.008 CC lib/nvme/nvme_io_msg.o 00:03:44.267 CC lib/accel/accel.o 00:03:44.267 CC lib/nvme/nvme_poll_group.o 00:03:44.525 CC lib/accel/accel_rpc.o 00:03:44.525 CC lib/accel/accel_sw.o 00:03:44.525 CC lib/nvme/nvme_zns.o 00:03:44.525 CC lib/nvme/nvme_stubs.o 00:03:44.525 CC lib/nvme/nvme_auth.o 00:03:44.784 CC lib/nvme/nvme_cuse.o 00:03:44.784 CC lib/nvme/nvme_rdma.o 00:03:44.784 CC lib/blob/blobstore.o 00:03:45.043 CC lib/blob/request.o 00:03:45.043 CC lib/blob/zeroes.o 00:03:45.302 CC lib/blob/blob_bs_dev.o 00:03:45.302 LIB libspdk_accel.a 00:03:45.302 CC lib/init/json_config.o 00:03:45.560 SO libspdk_accel.so.16.0 00:03:45.560 CC lib/init/subsystem.o 00:03:45.560 CC lib/init/subsystem_rpc.o 00:03:45.560 CC lib/virtio/virtio.o 00:03:45.560 SYMLINK libspdk_accel.so 00:03:45.560 CC lib/virtio/virtio_vhost_user.o 00:03:45.560 CC lib/virtio/virtio_vfio_user.o 00:03:45.560 CC lib/virtio/virtio_pci.o 00:03:45.560 CC lib/fsdev/fsdev.o 00:03:45.560 CC lib/init/rpc.o 00:03:45.560 CC lib/fsdev/fsdev_io.o 00:03:45.819 CC lib/fsdev/fsdev_rpc.o 00:03:45.819 LIB libspdk_init.a 00:03:45.819 CC lib/bdev/bdev.o 00:03:45.819 CC lib/bdev/bdev_rpc.o 00:03:45.819 CC lib/bdev/bdev_zone.o 00:03:45.819 SO libspdk_init.so.6.0 00:03:45.819 LIB libspdk_virtio.a 00:03:46.078 SYMLINK libspdk_init.so 00:03:46.078 CC lib/bdev/part.o 00:03:46.078 CC lib/bdev/scsi_nvme.o 00:03:46.078 SO 
libspdk_virtio.so.7.0 00:03:46.078 SYMLINK libspdk_virtio.so 00:03:46.337 CC lib/event/app.o 00:03:46.337 CC lib/event/reactor.o 00:03:46.337 CC lib/event/app_rpc.o 00:03:46.337 CC lib/event/scheduler_static.o 00:03:46.337 CC lib/event/log_rpc.o 00:03:46.337 LIB libspdk_fsdev.a 00:03:46.337 LIB libspdk_nvme.a 00:03:46.337 SO libspdk_fsdev.so.2.0 00:03:46.337 SYMLINK libspdk_fsdev.so 00:03:46.596 SO libspdk_nvme.so.15.0 00:03:46.596 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:46.855 LIB libspdk_event.a 00:03:46.855 SO libspdk_event.so.14.0 00:03:46.855 SYMLINK libspdk_nvme.so 00:03:46.855 SYMLINK libspdk_event.so 00:03:47.423 LIB libspdk_fuse_dispatcher.a 00:03:47.423 SO libspdk_fuse_dispatcher.so.1.0 00:03:47.423 SYMLINK libspdk_fuse_dispatcher.so 00:03:47.990 LIB libspdk_blob.a 00:03:48.249 SO libspdk_blob.so.12.0 00:03:48.249 SYMLINK libspdk_blob.so 00:03:48.508 CC lib/lvol/lvol.o 00:03:48.508 CC lib/blobfs/blobfs.o 00:03:48.508 CC lib/blobfs/tree.o 00:03:48.766 LIB libspdk_bdev.a 00:03:48.766 SO libspdk_bdev.so.17.0 00:03:48.766 SYMLINK libspdk_bdev.so 00:03:49.024 CC lib/nvmf/ctrlr.o 00:03:49.024 CC lib/nvmf/ctrlr_discovery.o 00:03:49.024 CC lib/nvmf/ctrlr_bdev.o 00:03:49.024 CC lib/nvmf/subsystem.o 00:03:49.024 CC lib/ftl/ftl_core.o 00:03:49.024 CC lib/scsi/dev.o 00:03:49.024 CC lib/ublk/ublk.o 00:03:49.024 CC lib/nbd/nbd.o 00:03:49.592 LIB libspdk_blobfs.a 00:03:49.592 CC lib/scsi/lun.o 00:03:49.592 SO libspdk_blobfs.so.11.0 00:03:49.592 LIB libspdk_lvol.a 00:03:49.592 SO libspdk_lvol.so.11.0 00:03:49.592 SYMLINK libspdk_blobfs.so 00:03:49.592 CC lib/scsi/port.o 00:03:49.592 CC lib/ftl/ftl_init.o 00:03:49.592 SYMLINK libspdk_lvol.so 00:03:49.592 CC lib/ftl/ftl_layout.o 00:03:49.592 CC lib/nbd/nbd_rpc.o 00:03:49.592 CC lib/ftl/ftl_debug.o 00:03:49.592 CC lib/ftl/ftl_io.o 00:03:49.853 CC lib/scsi/scsi.o 00:03:49.853 CC lib/ublk/ublk_rpc.o 00:03:49.853 CC lib/scsi/scsi_bdev.o 00:03:49.853 LIB libspdk_nbd.a 00:03:49.853 CC lib/scsi/scsi_pr.o 00:03:49.853 SO libspdk_nbd.so.7.0 00:03:49.853 CC lib/ftl/ftl_sb.o 00:03:49.853 SYMLINK libspdk_nbd.so 00:03:49.853 CC lib/scsi/scsi_rpc.o 00:03:49.853 CC lib/scsi/task.o 00:03:49.853 CC lib/nvmf/nvmf.o 00:03:49.853 LIB libspdk_ublk.a 00:03:50.113 CC lib/nvmf/nvmf_rpc.o 00:03:50.113 SO libspdk_ublk.so.3.0 00:03:50.113 SYMLINK libspdk_ublk.so 00:03:50.113 CC lib/ftl/ftl_l2p.o 00:03:50.113 CC lib/ftl/ftl_l2p_flat.o 00:03:50.113 CC lib/nvmf/transport.o 00:03:50.113 CC lib/nvmf/tcp.o 00:03:50.113 CC lib/nvmf/stubs.o 00:03:50.371 LIB libspdk_scsi.a 00:03:50.371 CC lib/ftl/ftl_nv_cache.o 00:03:50.371 CC lib/nvmf/mdns_server.o 00:03:50.371 SO libspdk_scsi.so.9.0 00:03:50.371 CC lib/nvmf/rdma.o 00:03:50.371 SYMLINK libspdk_scsi.so 00:03:50.371 CC lib/ftl/ftl_band.o 00:03:50.629 CC lib/nvmf/auth.o 00:03:50.888 CC lib/ftl/ftl_band_ops.o 00:03:50.888 CC lib/ftl/ftl_writer.o 00:03:50.888 CC lib/ftl/ftl_rq.o 00:03:50.888 CC lib/ftl/ftl_reloc.o 00:03:50.888 CC lib/ftl/ftl_l2p_cache.o 00:03:50.888 CC lib/ftl/ftl_p2l.o 00:03:51.147 CC lib/ftl/ftl_p2l_log.o 00:03:51.147 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.147 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.147 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.405 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.405 CC lib/iscsi/conn.o 00:03:51.405 CC lib/iscsi/init_grp.o 00:03:51.663 CC lib/vhost/vhost.o 
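Throughout this stretch of the log each SPDK library repeats the same quiet-build sequence: CC lines compile objects, a LIB line archives them into a static libspdk_<name>.a, an SO line links the versioned shared object (libspdk_log.so.7.1, libspdk_ut.so.2.0, and so on), and a SYMLINK line drops the unversioned development link beside it. Only the naming scheme comes from the log; the compiler driver, flags, and soname in this sketch are assumptions, not the actual makefile rule:

    # Illustrative only: what an SO + SYMLINK pair for lib/log amounts to.
    # Object names match the CC lines above; the flags and soname are assumed.
    cc -shared -fPIC -Wl,-soname,libspdk_log.so.7 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so    # the SYMLINK step

Applications can then link with plain -lspdk_log against the unversioned symlink while the dynamic loader resolves the versioned name at runtime.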
00:03:51.663 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.663 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.663 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.663 CC lib/vhost/vhost_rpc.o 00:03:51.663 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.663 CC lib/iscsi/iscsi.o 00:03:51.922 CC lib/iscsi/param.o 00:03:51.922 CC lib/iscsi/portal_grp.o 00:03:51.922 CC lib/iscsi/tgt_node.o 00:03:52.181 CC lib/iscsi/iscsi_subsystem.o 00:03:52.181 CC lib/iscsi/iscsi_rpc.o 00:03:52.181 CC lib/ftl/utils/ftl_conf.o 00:03:52.181 CC lib/ftl/utils/ftl_md.o 00:03:52.181 CC lib/iscsi/task.o 00:03:52.440 CC lib/vhost/vhost_scsi.o 00:03:52.440 CC lib/vhost/vhost_blk.o 00:03:52.440 CC lib/vhost/rte_vhost_user.o 00:03:52.440 CC lib/ftl/utils/ftl_mempool.o 00:03:52.440 CC lib/ftl/utils/ftl_property.o 00:03:52.440 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.440 LIB libspdk_nvmf.a 00:03:52.699 SO libspdk_nvmf.so.20.0 00:03:52.699 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.699 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.699 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.699 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.699 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.957 SYMLINK libspdk_nvmf.so 00:03:52.957 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.957 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:52.957 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.957 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:52.957 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:52.957 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.957 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:53.215 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:53.215 CC lib/ftl/base/ftl_base_dev.o 00:03:53.215 LIB libspdk_iscsi.a 00:03:53.215 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.215 CC lib/ftl/ftl_trace.o 00:03:53.215 SO libspdk_iscsi.so.8.0 00:03:53.473 SYMLINK libspdk_iscsi.so 00:03:53.473 LIB libspdk_vhost.a 00:03:53.473 LIB libspdk_ftl.a 00:03:53.731 SO libspdk_vhost.so.8.0 00:03:53.731 SYMLINK libspdk_vhost.so 00:03:53.731 SO libspdk_ftl.so.9.0 00:03:54.297 SYMLINK libspdk_ftl.so 00:03:54.556 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.556 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.556 CC module/keyring/linux/keyring.o 00:03:54.556 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.556 CC module/keyring/file/keyring.o 00:03:54.556 CC module/blob/bdev/blob_bdev.o 00:03:54.556 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.556 CC module/fsdev/aio/fsdev_aio.o 00:03:54.556 CC module/accel/error/accel_error.o 00:03:54.556 CC module/sock/posix/posix.o 00:03:54.556 LIB libspdk_env_dpdk_rpc.a 00:03:54.556 SO libspdk_env_dpdk_rpc.so.6.0 00:03:54.814 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.814 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:54.814 CC module/keyring/linux/keyring_rpc.o 00:03:54.814 CC module/keyring/file/keyring_rpc.o 00:03:54.814 LIB libspdk_scheduler_gscheduler.a 00:03:54.814 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.814 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.814 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:54.814 CC module/accel/error/accel_error_rpc.o 00:03:54.814 LIB libspdk_scheduler_dynamic.a 00:03:54.814 SO libspdk_scheduler_dynamic.so.4.0 00:03:54.814 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.814 LIB libspdk_keyring_linux.a 00:03:54.814 LIB libspdk_blob_bdev.a 00:03:54.814 LIB libspdk_keyring_file.a 00:03:54.814 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.814 SO libspdk_keyring_linux.so.1.0 00:03:54.814 SO libspdk_blob_bdev.so.12.0 00:03:54.814 SO libspdk_keyring_file.so.2.0 00:03:55.073 SYMLINK libspdk_scheduler_dynamic.so 
00:03:55.073 LIB libspdk_accel_error.a 00:03:55.073 SYMLINK libspdk_blob_bdev.so 00:03:55.073 SYMLINK libspdk_keyring_linux.so 00:03:55.073 SYMLINK libspdk_keyring_file.so 00:03:55.073 CC module/fsdev/aio/linux_aio_mgr.o 00:03:55.073 SO libspdk_accel_error.so.2.0 00:03:55.073 CC module/accel/ioat/accel_ioat.o 00:03:55.073 CC module/accel/ioat/accel_ioat_rpc.o 00:03:55.073 SYMLINK libspdk_accel_error.so 00:03:55.073 CC module/accel/dsa/accel_dsa.o 00:03:55.073 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.073 CC module/accel/iaa/accel_iaa.o 00:03:55.350 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.350 LIB libspdk_fsdev_aio.a 00:03:55.350 LIB libspdk_accel_ioat.a 00:03:55.350 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.350 SO libspdk_fsdev_aio.so.1.0 00:03:55.350 CC module/bdev/delay/vbdev_delay.o 00:03:55.350 SO libspdk_accel_ioat.so.6.0 00:03:55.350 LIB libspdk_sock_posix.a 00:03:55.350 LIB libspdk_accel_iaa.a 00:03:55.350 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.350 SYMLINK libspdk_fsdev_aio.so 00:03:55.350 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.350 SO libspdk_accel_iaa.so.3.0 00:03:55.350 SO libspdk_sock_posix.so.6.0 00:03:55.350 SYMLINK libspdk_accel_ioat.so 00:03:55.350 LIB libspdk_accel_dsa.a 00:03:55.350 CC module/bdev/error/vbdev_error.o 00:03:55.350 SYMLINK libspdk_accel_iaa.so 00:03:55.350 SO libspdk_accel_dsa.so.5.0 00:03:55.619 SYMLINK libspdk_sock_posix.so 00:03:55.619 CC module/bdev/gpt/gpt.o 00:03:55.619 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.619 SYMLINK libspdk_accel_dsa.so 00:03:55.619 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.619 LIB libspdk_blobfs_bdev.a 00:03:55.619 SO libspdk_blobfs_bdev.so.6.0 00:03:55.619 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.619 CC module/bdev/malloc/bdev_malloc.o 00:03:55.619 SYMLINK libspdk_blobfs_bdev.so 00:03:55.619 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.619 CC module/bdev/null/bdev_null.o 00:03:55.619 CC module/bdev/null/bdev_null_rpc.o 00:03:55.619 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.619 LIB libspdk_bdev_error.a 00:03:55.619 LIB libspdk_bdev_delay.a 00:03:55.619 CC module/bdev/nvme/bdev_nvme.o 00:03:55.877 SO libspdk_bdev_error.so.6.0 00:03:55.877 SO libspdk_bdev_delay.so.6.0 00:03:55.877 LIB libspdk_bdev_gpt.a 00:03:55.877 SO libspdk_bdev_gpt.so.6.0 00:03:55.877 SYMLINK libspdk_bdev_error.so 00:03:55.877 SYMLINK libspdk_bdev_delay.so 00:03:55.877 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.877 SYMLINK libspdk_bdev_gpt.so 00:03:55.877 LIB libspdk_bdev_null.a 00:03:56.137 SO libspdk_bdev_null.so.6.0 00:03:56.137 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.137 LIB libspdk_bdev_malloc.a 00:03:56.137 SYMLINK libspdk_bdev_null.so 00:03:56.137 SO libspdk_bdev_malloc.so.6.0 00:03:56.137 CC module/bdev/split/vbdev_split.o 00:03:56.137 CC module/bdev/raid/bdev_raid.o 00:03:56.137 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.137 LIB libspdk_bdev_lvol.a 00:03:56.137 SYMLINK libspdk_bdev_malloc.so 00:03:56.137 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.137 SO libspdk_bdev_lvol.so.6.0 00:03:56.395 CC module/bdev/aio/bdev_aio.o 00:03:56.395 CC module/bdev/ftl/bdev_ftl.o 00:03:56.395 SYMLINK libspdk_bdev_lvol.so 00:03:56.395 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.395 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.395 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.395 CC module/bdev/nvme/nvme_rpc.o 00:03:56.395 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.653 LIB libspdk_bdev_passthru.a 00:03:56.653 LIB libspdk_bdev_split.a 00:03:56.653 SO libspdk_bdev_passthru.so.6.0 
00:03:56.653 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.653 SO libspdk_bdev_split.so.6.0 00:03:56.653 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.653 SYMLINK libspdk_bdev_passthru.so 00:03:56.653 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.653 LIB libspdk_bdev_aio.a 00:03:56.653 LIB libspdk_bdev_zone_block.a 00:03:56.653 SYMLINK libspdk_bdev_split.so 00:03:56.653 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.653 SO libspdk_bdev_aio.so.6.0 00:03:56.653 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.653 SO libspdk_bdev_zone_block.so.6.0 00:03:56.653 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.912 SYMLINK libspdk_bdev_aio.so 00:03:56.912 SYMLINK libspdk_bdev_zone_block.so 00:03:56.912 CC module/bdev/raid/raid0.o 00:03:56.912 CC module/bdev/raid/raid1.o 00:03:56.912 CC module/bdev/raid/concat.o 00:03:56.912 LIB libspdk_bdev_ftl.a 00:03:56.912 SO libspdk_bdev_ftl.so.6.0 00:03:56.912 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.912 LIB libspdk_bdev_iscsi.a 00:03:56.912 SYMLINK libspdk_bdev_ftl.so 00:03:57.171 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.171 SO libspdk_bdev_iscsi.so.6.0 00:03:57.171 CC module/bdev/nvme/vbdev_opal.o 00:03:57.171 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.171 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.171 SYMLINK libspdk_bdev_iscsi.so 00:03:57.171 LIB libspdk_bdev_raid.a 00:03:57.171 SO libspdk_bdev_raid.so.6.0 00:03:57.430 LIB libspdk_bdev_virtio.a 00:03:57.430 SYMLINK libspdk_bdev_raid.so 00:03:57.430 SO libspdk_bdev_virtio.so.6.0 00:03:57.430 SYMLINK libspdk_bdev_virtio.so 00:03:58.806 LIB libspdk_bdev_nvme.a 00:03:58.806 SO libspdk_bdev_nvme.so.7.1 00:03:58.806 SYMLINK libspdk_bdev_nvme.so 00:03:59.373 CC module/event/subsystems/sock/sock.o 00:03:59.373 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.374 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.374 CC module/event/subsystems/vmd/vmd.o 00:03:59.374 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.374 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.374 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.374 CC module/event/subsystems/fsdev/fsdev.o 00:03:59.374 CC module/event/subsystems/keyring/keyring.o 00:03:59.374 LIB libspdk_event_scheduler.a 00:03:59.374 LIB libspdk_event_vmd.a 00:03:59.374 LIB libspdk_event_sock.a 00:03:59.374 LIB libspdk_event_vhost_blk.a 00:03:59.374 LIB libspdk_event_fsdev.a 00:03:59.374 SO libspdk_event_scheduler.so.4.0 00:03:59.374 SO libspdk_event_sock.so.5.0 00:03:59.374 SO libspdk_event_vhost_blk.so.3.0 00:03:59.374 SO libspdk_event_vmd.so.6.0 00:03:59.374 SO libspdk_event_fsdev.so.1.0 00:03:59.374 LIB libspdk_event_iobuf.a 00:03:59.374 LIB libspdk_event_keyring.a 00:03:59.632 SO libspdk_event_keyring.so.1.0 00:03:59.632 SO libspdk_event_iobuf.so.3.0 00:03:59.632 SYMLINK libspdk_event_scheduler.so 00:03:59.632 SYMLINK libspdk_event_sock.so 00:03:59.632 SYMLINK libspdk_event_vhost_blk.so 00:03:59.633 SYMLINK libspdk_event_vmd.so 00:03:59.633 SYMLINK libspdk_event_fsdev.so 00:03:59.633 SYMLINK libspdk_event_keyring.so 00:03:59.633 SYMLINK libspdk_event_iobuf.so 00:03:59.891 CC module/event/subsystems/accel/accel.o 00:04:00.149 LIB libspdk_event_accel.a 00:04:00.149 SO libspdk_event_accel.so.6.0 00:04:00.149 SYMLINK libspdk_event_accel.so 00:04:00.408 CC module/event/subsystems/bdev/bdev.o 00:04:00.667 LIB libspdk_event_bdev.a 00:04:00.667 SO libspdk_event_bdev.so.6.0 00:04:00.925 SYMLINK libspdk_event_bdev.so 00:04:00.925 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.925 CC module/event/subsystems/ublk/ublk.o 00:04:00.925 
CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:00.925 CC module/event/subsystems/scsi/scsi.o 00:04:00.925 CC module/event/subsystems/nbd/nbd.o 00:04:01.184 LIB libspdk_event_ublk.a 00:04:01.184 LIB libspdk_event_nbd.a 00:04:01.184 LIB libspdk_event_scsi.a 00:04:01.184 SO libspdk_event_ublk.so.3.0 00:04:01.184 SO libspdk_event_nbd.so.6.0 00:04:01.184 SO libspdk_event_scsi.so.6.0 00:04:01.442 SYMLINK libspdk_event_nbd.so 00:04:01.442 SYMLINK libspdk_event_ublk.so 00:04:01.442 SYMLINK libspdk_event_scsi.so 00:04:01.442 LIB libspdk_event_nvmf.a 00:04:01.442 SO libspdk_event_nvmf.so.6.0 00:04:01.442 SYMLINK libspdk_event_nvmf.so 00:04:01.701 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.701 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.701 LIB libspdk_event_vhost_scsi.a 00:04:01.701 LIB libspdk_event_iscsi.a 00:04:01.701 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.960 SO libspdk_event_iscsi.so.6.0 00:04:01.960 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.960 SYMLINK libspdk_event_iscsi.so 00:04:01.960 SO libspdk.so.6.0 00:04:02.218 SYMLINK libspdk.so 00:04:02.477 CXX app/trace/trace.o 00:04:02.477 CC app/trace_record/trace_record.o 00:04:02.477 CC test/rpc_client/rpc_client_test.o 00:04:02.477 TEST_HEADER include/spdk/accel.h 00:04:02.477 TEST_HEADER include/spdk/accel_module.h 00:04:02.477 TEST_HEADER include/spdk/assert.h 00:04:02.477 TEST_HEADER include/spdk/barrier.h 00:04:02.477 TEST_HEADER include/spdk/base64.h 00:04:02.477 TEST_HEADER include/spdk/bdev.h 00:04:02.477 TEST_HEADER include/spdk/bdev_module.h 00:04:02.477 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.477 TEST_HEADER include/spdk/bit_array.h 00:04:02.477 TEST_HEADER include/spdk/bit_pool.h 00:04:02.477 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.477 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.477 TEST_HEADER include/spdk/blobfs.h 00:04:02.477 TEST_HEADER include/spdk/blob.h 00:04:02.477 TEST_HEADER include/spdk/conf.h 00:04:02.477 TEST_HEADER include/spdk/config.h 00:04:02.477 TEST_HEADER include/spdk/cpuset.h 00:04:02.477 TEST_HEADER include/spdk/crc16.h 00:04:02.477 TEST_HEADER include/spdk/crc32.h 00:04:02.477 TEST_HEADER include/spdk/crc64.h 00:04:02.477 TEST_HEADER include/spdk/dif.h 00:04:02.477 TEST_HEADER include/spdk/dma.h 00:04:02.477 TEST_HEADER include/spdk/endian.h 00:04:02.477 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.477 TEST_HEADER include/spdk/env.h 00:04:02.477 TEST_HEADER include/spdk/event.h 00:04:02.477 TEST_HEADER include/spdk/fd_group.h 00:04:02.477 TEST_HEADER include/spdk/fd.h 00:04:02.478 TEST_HEADER include/spdk/file.h 00:04:02.478 TEST_HEADER include/spdk/fsdev.h 00:04:02.478 TEST_HEADER include/spdk/fsdev_module.h 00:04:02.478 TEST_HEADER include/spdk/ftl.h 00:04:02.478 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.478 TEST_HEADER include/spdk/hexlify.h 00:04:02.478 TEST_HEADER include/spdk/histogram_data.h 00:04:02.478 TEST_HEADER include/spdk/idxd.h 00:04:02.478 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.478 CC test/thread/poller_perf/poller_perf.o 00:04:02.478 CC examples/ioat/perf/perf.o 00:04:02.478 TEST_HEADER include/spdk/init.h 00:04:02.478 TEST_HEADER include/spdk/ioat.h 00:04:02.478 CC examples/util/zipf/zipf.o 00:04:02.478 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.478 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.478 TEST_HEADER include/spdk/json.h 00:04:02.478 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.478 TEST_HEADER include/spdk/keyring.h 00:04:02.478 TEST_HEADER include/spdk/keyring_module.h 00:04:02.478 TEST_HEADER include/spdk/likely.h 
00:04:02.478 TEST_HEADER include/spdk/log.h 00:04:02.478 TEST_HEADER include/spdk/lvol.h 00:04:02.478 CC test/app/bdev_svc/bdev_svc.o 00:04:02.478 TEST_HEADER include/spdk/md5.h 00:04:02.478 TEST_HEADER include/spdk/memory.h 00:04:02.478 TEST_HEADER include/spdk/mmio.h 00:04:02.478 TEST_HEADER include/spdk/nbd.h 00:04:02.478 TEST_HEADER include/spdk/net.h 00:04:02.478 TEST_HEADER include/spdk/notify.h 00:04:02.478 CC test/dma/test_dma/test_dma.o 00:04:02.478 TEST_HEADER include/spdk/nvme.h 00:04:02.478 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.478 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.478 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.478 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.478 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.478 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.737 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.737 TEST_HEADER include/spdk/nvmf.h 00:04:02.737 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.737 LINK rpc_client_test 00:04:02.737 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.737 TEST_HEADER include/spdk/opal.h 00:04:02.737 TEST_HEADER include/spdk/opal_spec.h 00:04:02.737 TEST_HEADER include/spdk/pci_ids.h 00:04:02.737 TEST_HEADER include/spdk/pipe.h 00:04:02.737 TEST_HEADER include/spdk/queue.h 00:04:02.737 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.737 TEST_HEADER include/spdk/reduce.h 00:04:02.737 TEST_HEADER include/spdk/rpc.h 00:04:02.737 TEST_HEADER include/spdk/scheduler.h 00:04:02.737 TEST_HEADER include/spdk/scsi.h 00:04:02.737 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.737 TEST_HEADER include/spdk/sock.h 00:04:02.737 TEST_HEADER include/spdk/stdinc.h 00:04:02.737 TEST_HEADER include/spdk/string.h 00:04:02.737 TEST_HEADER include/spdk/thread.h 00:04:02.737 TEST_HEADER include/spdk/trace.h 00:04:02.737 TEST_HEADER include/spdk/trace_parser.h 00:04:02.737 TEST_HEADER include/spdk/tree.h 00:04:02.737 TEST_HEADER include/spdk/ublk.h 00:04:02.737 TEST_HEADER include/spdk/util.h 00:04:02.737 TEST_HEADER include/spdk/uuid.h 00:04:02.737 TEST_HEADER include/spdk/version.h 00:04:02.737 LINK spdk_trace_record 00:04:02.737 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.737 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.737 LINK poller_perf 00:04:02.737 TEST_HEADER include/spdk/vhost.h 00:04:02.737 TEST_HEADER include/spdk/vmd.h 00:04:02.737 TEST_HEADER include/spdk/xor.h 00:04:02.737 TEST_HEADER include/spdk/zipf.h 00:04:02.737 CXX test/cpp_headers/accel.o 00:04:02.737 LINK zipf 00:04:02.737 LINK bdev_svc 00:04:02.737 LINK ioat_perf 00:04:02.737 LINK spdk_trace 00:04:02.995 CC test/env/vtophys/vtophys.o 00:04:02.995 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.995 CXX test/cpp_headers/accel_module.o 00:04:02.995 CC test/env/memory/memory_ut.o 00:04:02.995 CC test/env/pci/pci_ut.o 00:04:02.995 CC examples/ioat/verify/verify.o 00:04:02.995 LINK vtophys 00:04:03.254 LINK env_dpdk_post_init 00:04:03.254 CXX test/cpp_headers/assert.o 00:04:03.254 LINK test_dma 00:04:03.254 CC app/nvmf_tgt/nvmf_main.o 00:04:03.254 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.254 LINK mem_callbacks 00:04:03.254 LINK verify 00:04:03.254 CXX test/cpp_headers/barrier.o 00:04:03.254 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.512 CXX test/cpp_headers/base64.o 00:04:03.512 CXX test/cpp_headers/bdev.o 00:04:03.512 LINK nvmf_tgt 00:04:03.512 LINK pci_ut 00:04:03.512 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.512 CXX test/cpp_headers/bdev_module.o 00:04:03.772 CXX test/cpp_headers/bdev_zone.o 00:04:03.772 CC 
test/app/histogram_perf/histogram_perf.o 00:04:03.772 LINK nvme_fuzz 00:04:03.772 CC test/app/jsoncat/jsoncat.o 00:04:03.772 CC test/app/stub/stub.o 00:04:03.772 LINK iscsi_tgt 00:04:03.772 LINK histogram_perf 00:04:03.772 LINK jsoncat 00:04:03.772 CXX test/cpp_headers/bit_array.o 00:04:04.032 CXX test/cpp_headers/bit_pool.o 00:04:04.032 LINK stub 00:04:04.032 CC app/spdk_tgt/spdk_tgt.o 00:04:04.032 CXX test/cpp_headers/blob_bdev.o 00:04:04.032 CC test/event/event_perf/event_perf.o 00:04:04.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:04.290 CXX test/cpp_headers/blobfs_bdev.o 00:04:04.290 LINK event_perf 00:04:04.290 CC test/nvme/reset/reset.o 00:04:04.290 LINK spdk_tgt 00:04:04.290 CC test/nvme/aer/aer.o 00:04:04.290 CC app/spdk_lspci/spdk_lspci.o 00:04:04.290 LINK memory_ut 00:04:04.290 LINK interrupt_tgt 00:04:04.549 LINK spdk_lspci 00:04:04.549 CXX test/cpp_headers/blobfs.o 00:04:04.549 CC test/event/reactor/reactor.o 00:04:04.549 CXX test/cpp_headers/blob.o 00:04:04.549 CXX test/cpp_headers/conf.o 00:04:04.549 LINK reset 00:04:04.549 CC test/event/reactor_perf/reactor_perf.o 00:04:04.549 LINK aer 00:04:04.808 CXX test/cpp_headers/config.o 00:04:04.808 CC app/spdk_nvme_perf/perf.o 00:04:04.808 CC test/nvme/sgl/sgl.o 00:04:04.808 LINK reactor 00:04:04.808 CXX test/cpp_headers/cpuset.o 00:04:04.808 LINK reactor_perf 00:04:05.066 CC test/event/app_repeat/app_repeat.o 00:04:05.066 CXX test/cpp_headers/crc16.o 00:04:05.066 CC test/event/scheduler/scheduler.o 00:04:05.066 CC app/spdk_nvme_identify/identify.o 00:04:05.066 LINK sgl 00:04:05.066 LINK app_repeat 00:04:05.325 CXX test/cpp_headers/crc32.o 00:04:05.325 LINK iscsi_fuzz 00:04:05.325 CC test/accel/dif/dif.o 00:04:05.325 CC test/blobfs/mkfs/mkfs.o 00:04:05.325 LINK scheduler 00:04:05.325 CXX test/cpp_headers/crc64.o 00:04:05.584 CC test/nvme/e2edp/nvme_dp.o 00:04:05.584 LINK mkfs 00:04:05.584 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:05.584 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:05.584 CXX test/cpp_headers/dif.o 00:04:05.584 CC test/lvol/esnap/esnap.o 00:04:05.584 LINK spdk_nvme_perf 00:04:05.584 CXX test/cpp_headers/dma.o 00:04:05.842 LINK nvme_dp 00:04:05.842 CXX test/cpp_headers/endian.o 00:04:05.842 LINK spdk_nvme_identify 00:04:06.099 CXX test/cpp_headers/env_dpdk.o 00:04:06.099 LINK vhost_fuzz 00:04:06.099 LINK dif 00:04:06.099 CC test/nvme/overhead/overhead.o 00:04:06.099 CC app/spdk_nvme_discover/discovery_aer.o 00:04:06.099 CC examples/thread/thread/thread_ex.o 00:04:06.099 CC examples/sock/hello_world/hello_sock.o 00:04:06.099 CXX test/cpp_headers/env.o 00:04:06.357 CC test/nvme/err_injection/err_injection.o 00:04:06.357 LINK spdk_nvme_discover 00:04:06.357 CXX test/cpp_headers/event.o 00:04:06.357 CC test/nvme/startup/startup.o 00:04:06.357 LINK hello_sock 00:04:06.357 LINK overhead 00:04:06.357 CC test/nvme/reserve/reserve.o 00:04:06.615 LINK thread 00:04:06.615 CXX test/cpp_headers/fd_group.o 00:04:06.615 LINK err_injection 00:04:06.615 LINK startup 00:04:06.615 LINK reserve 00:04:06.615 CC app/spdk_top/spdk_top.o 00:04:06.874 CXX test/cpp_headers/fd.o 00:04:06.874 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.874 CC examples/idxd/perf/perf.o 00:04:06.874 CC examples/vmd/led/led.o 00:04:06.874 CC test/nvme/simple_copy/simple_copy.o 00:04:06.874 CC app/vhost/vhost.o 00:04:06.874 CC app/spdk_dd/spdk_dd.o 00:04:06.874 LINK lsvmd 00:04:07.132 CXX test/cpp_headers/file.o 00:04:07.132 LINK led 00:04:07.132 LINK vhost 00:04:07.132 LINK idxd_perf 00:04:07.132 CXX test/cpp_headers/fsdev.o 00:04:07.391 LINK 
simple_copy 00:04:07.391 CXX test/cpp_headers/fsdev_module.o 00:04:07.391 CC app/fio/nvme/fio_plugin.o 00:04:07.391 LINK spdk_dd 00:04:07.649 CC examples/nvme/hello_world/hello_world.o 00:04:07.649 CXX test/cpp_headers/ftl.o 00:04:07.908 CC test/nvme/connect_stress/connect_stress.o 00:04:07.908 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:07.908 LINK spdk_top 00:04:07.908 CXX test/cpp_headers/gpt_spec.o 00:04:07.908 CC test/nvme/boot_partition/boot_partition.o 00:04:07.908 LINK hello_world 00:04:08.169 LINK connect_stress 00:04:08.169 CC test/nvme/compliance/nvme_compliance.o 00:04:08.169 CXX test/cpp_headers/hexlify.o 00:04:08.169 LINK spdk_nvme 00:04:08.169 LINK boot_partition 00:04:08.427 LINK hello_fsdev 00:04:08.427 CXX test/cpp_headers/histogram_data.o 00:04:08.427 CC examples/nvme/reconnect/reconnect.o 00:04:08.427 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.427 CC test/bdev/bdevio/bdevio.o 00:04:08.427 LINK nvme_compliance 00:04:08.685 CC app/fio/bdev/fio_plugin.o 00:04:08.686 CXX test/cpp_headers/idxd.o 00:04:08.686 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.686 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.686 CXX test/cpp_headers/idxd_spec.o 00:04:08.686 LINK reconnect 00:04:08.944 CXX test/cpp_headers/init.o 00:04:08.944 LINK doorbell_aers 00:04:08.944 LINK fused_ordering 00:04:08.944 LINK bdevio 00:04:08.944 LINK nvme_manage 00:04:09.202 CXX test/cpp_headers/ioat.o 00:04:09.202 CC examples/nvme/arbitration/arbitration.o 00:04:09.202 CC examples/accel/perf/accel_perf.o 00:04:09.202 LINK spdk_bdev 00:04:09.202 CC examples/nvme/hotplug/hotplug.o 00:04:09.202 CC test/nvme/fdp/fdp.o 00:04:09.202 CXX test/cpp_headers/ioat_spec.o 00:04:09.202 CXX test/cpp_headers/iscsi_spec.o 00:04:09.202 CC test/nvme/cuse/cuse.o 00:04:09.460 CC examples/blob/hello_world/hello_blob.o 00:04:09.460 LINK hotplug 00:04:09.460 LINK arbitration 00:04:09.460 CXX test/cpp_headers/json.o 00:04:09.460 LINK fdp 00:04:09.719 CC examples/blob/cli/blobcli.o 00:04:09.719 CXX test/cpp_headers/jsonrpc.o 00:04:09.719 LINK accel_perf 00:04:09.719 CXX test/cpp_headers/keyring.o 00:04:09.719 LINK hello_blob 00:04:09.719 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:09.719 CXX test/cpp_headers/keyring_module.o 00:04:09.719 CC examples/nvme/abort/abort.o 00:04:09.977 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:09.977 LINK cmb_copy 00:04:09.977 CXX test/cpp_headers/likely.o 00:04:09.977 CXX test/cpp_headers/log.o 00:04:10.235 CXX test/cpp_headers/lvol.o 00:04:10.235 LINK blobcli 00:04:10.235 LINK pmr_persistence 00:04:10.235 LINK abort 00:04:10.235 CC examples/bdev/hello_world/hello_bdev.o 00:04:10.235 CXX test/cpp_headers/md5.o 00:04:10.235 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.494 CXX test/cpp_headers/memory.o 00:04:10.494 CXX test/cpp_headers/mmio.o 00:04:10.494 CXX test/cpp_headers/nbd.o 00:04:10.494 CXX test/cpp_headers/net.o 00:04:10.494 CXX test/cpp_headers/notify.o 00:04:10.494 LINK hello_bdev 00:04:10.494 CXX test/cpp_headers/nvme.o 00:04:10.494 CXX test/cpp_headers/nvme_intel.o 00:04:10.494 CXX test/cpp_headers/nvme_ocssd.o 00:04:10.494 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:10.753 CXX test/cpp_headers/nvme_spec.o 00:04:10.753 CXX test/cpp_headers/nvme_zns.o 00:04:10.753 LINK cuse 00:04:10.753 CXX test/cpp_headers/nvmf_cmd.o 00:04:10.753 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:10.753 CXX test/cpp_headers/nvmf.o 00:04:10.753 CXX test/cpp_headers/nvmf_spec.o 00:04:10.753 CXX test/cpp_headers/nvmf_transport.o 00:04:11.011 CXX test/cpp_headers/opal.o 00:04:11.011 
CXX test/cpp_headers/opal_spec.o 00:04:11.011 CXX test/cpp_headers/pci_ids.o 00:04:11.011 CXX test/cpp_headers/pipe.o 00:04:11.011 CXX test/cpp_headers/queue.o 00:04:11.011 CXX test/cpp_headers/reduce.o 00:04:11.011 CXX test/cpp_headers/rpc.o 00:04:11.011 CXX test/cpp_headers/scheduler.o 00:04:11.011 CXX test/cpp_headers/scsi.o 00:04:11.011 CXX test/cpp_headers/scsi_spec.o 00:04:11.011 CXX test/cpp_headers/sock.o 00:04:11.270 CXX test/cpp_headers/stdinc.o 00:04:11.270 CXX test/cpp_headers/string.o 00:04:11.270 CXX test/cpp_headers/thread.o 00:04:11.270 CXX test/cpp_headers/trace.o 00:04:11.270 CXX test/cpp_headers/trace_parser.o 00:04:11.270 LINK bdevperf 00:04:11.270 CXX test/cpp_headers/tree.o 00:04:11.270 CXX test/cpp_headers/ublk.o 00:04:11.270 CXX test/cpp_headers/util.o 00:04:11.270 CXX test/cpp_headers/uuid.o 00:04:11.270 CXX test/cpp_headers/version.o 00:04:11.270 CXX test/cpp_headers/vfio_user_pci.o 00:04:11.270 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.270 CXX test/cpp_headers/vhost.o 00:04:11.270 CXX test/cpp_headers/vmd.o 00:04:11.270 LINK esnap 00:04:11.529 CXX test/cpp_headers/xor.o 00:04:11.529 CXX test/cpp_headers/zipf.o 00:04:11.787 CC examples/nvmf/nvmf/nvmf.o 00:04:12.046 LINK nvmf 00:04:12.304 00:04:12.304 real 1m32.342s 00:04:12.304 user 8m34.828s 00:04:12.304 sys 1m47.540s 00:04:12.304 11:24:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:12.304 ************************************ 00:04:12.304 END TEST make 00:04:12.304 11:24:17 make -- common/autotest_common.sh@10 -- $ set +x 00:04:12.304 ************************************ 00:04:12.304 11:24:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:12.304 11:24:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:12.304 11:24:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:12.304 11:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.304 11:24:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:12.304 11:24:17 -- pm/common@44 -- $ pid=5302 00:04:12.304 11:24:17 -- pm/common@50 -- $ kill -TERM 5302 00:04:12.304 11:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.304 11:24:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:12.304 11:24:17 -- pm/common@44 -- $ pid=5304 00:04:12.304 11:24:17 -- pm/common@50 -- $ kill -TERM 5304 00:04:12.304 11:24:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:12.304 11:24:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.304 11:24:17 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.304 11:24:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.304 11:24:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.304 11:24:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.304 11:24:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.304 11:24:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.304 11:24:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.304 11:24:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.304 11:24:17 -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.304 11:24:17 -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.304 11:24:17 -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.304 11:24:17 -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.304 11:24:17 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:12.304 11:24:17 -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.304 11:24:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.304 11:24:17 -- scripts/common.sh@344 -- # case "$op" in 00:04:12.304 11:24:17 -- scripts/common.sh@345 -- # : 1 00:04:12.304 11:24:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.304 11:24:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.304 11:24:17 -- scripts/common.sh@365 -- # decimal 1 00:04:12.304 11:24:17 -- scripts/common.sh@353 -- # local d=1 00:04:12.304 11:24:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.304 11:24:17 -- scripts/common.sh@355 -- # echo 1 00:04:12.304 11:24:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.304 11:24:17 -- scripts/common.sh@366 -- # decimal 2 00:04:12.304 11:24:17 -- scripts/common.sh@353 -- # local d=2 00:04:12.304 11:24:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.304 11:24:17 -- scripts/common.sh@355 -- # echo 2 00:04:12.304 11:24:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.304 11:24:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.304 11:24:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.304 11:24:17 -- scripts/common.sh@368 -- # return 0 00:04:12.304 11:24:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.304 11:24:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.304 --rc genhtml_branch_coverage=1 00:04:12.304 --rc genhtml_function_coverage=1 00:04:12.304 --rc genhtml_legend=1 00:04:12.305 --rc geninfo_all_blocks=1 00:04:12.305 --rc geninfo_unexecuted_blocks=1 00:04:12.305 00:04:12.305 ' 00:04:12.305 11:24:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.305 --rc genhtml_branch_coverage=1 00:04:12.305 --rc genhtml_function_coverage=1 00:04:12.305 --rc genhtml_legend=1 00:04:12.305 --rc geninfo_all_blocks=1 00:04:12.305 --rc geninfo_unexecuted_blocks=1 00:04:12.305 00:04:12.305 ' 00:04:12.305 11:24:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.305 --rc genhtml_branch_coverage=1 00:04:12.305 --rc genhtml_function_coverage=1 00:04:12.305 --rc genhtml_legend=1 00:04:12.305 --rc geninfo_all_blocks=1 00:04:12.305 --rc geninfo_unexecuted_blocks=1 00:04:12.305 00:04:12.305 ' 00:04:12.305 11:24:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.305 --rc genhtml_branch_coverage=1 00:04:12.305 --rc genhtml_function_coverage=1 00:04:12.305 --rc genhtml_legend=1 00:04:12.305 --rc geninfo_all_blocks=1 00:04:12.305 --rc geninfo_unexecuted_blocks=1 00:04:12.305 00:04:12.305 ' 00:04:12.305 11:24:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:12.305 11:24:17 -- nvmf/common.sh@7 -- # uname -s 00:04:12.563 11:24:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.563 11:24:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.563 11:24:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.563 11:24:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.563 11:24:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.563 11:24:17 -- nvmf/common.sh@13 -- # 
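The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, in which case the extra --rc lcov_branch_coverage/lcov_function_coverage switches are still accepted. A compact standalone reconstruction of that field-by-field comparison (a sketch, not the actual scripts/common.sh source):

    # True (status 0) when dot-separated version $1 is strictly less than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    # Mirrors the traced pipeline: the last field of `lcov --version` is the version.
    ver=$(lcov --version | awk '{print $NF}')
    version_lt "$ver" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'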
NVMF_IP_LEAST_ADDR=8 00:04:12.563 11:24:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.563 11:24:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.563 11:24:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.563 11:24:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.563 11:24:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:04:12.563 11:24:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:04:12.563 11:24:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.563 11:24:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.563 11:24:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:12.563 11:24:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.563 11:24:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.563 11:24:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.563 11:24:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.563 11:24:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.563 11:24:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.563 11:24:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.563 11:24:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.564 11:24:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.564 11:24:17 -- paths/export.sh@5 -- # export PATH 00:04:12.564 11:24:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.564 11:24:17 -- nvmf/common.sh@51 -- # : 0 00:04:12.564 11:24:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.564 11:24:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.564 11:24:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.564 11:24:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.564 11:24:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.564 11:24:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.564 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.564 11:24:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.564 11:24:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.564 11:24:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.564 11:24:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:12.564 11:24:17 -- spdk/autotest.sh@32 -- # uname -s 
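The "[: : integer expression expected" message captured above is a real, if harmless, bug in the run: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']' because the variable under test is empty, and test(1) cannot compare an empty string numerically, so the guard fails with noise instead of cleanly. A defensive form gives the flag a numeric default first (the variable name below is hypothetical; the trace does not show which flag was empty):

    # Default an unset/empty numeric flag to 0 before testing it.
    SOME_NVMF_FLAG=${SOME_NVMF_FLAG:-0}
    if [ "$SOME_NVMF_FLAG" -eq 1 ]; then
        echo "flag-gated setup would run here"
    fi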
00:04:12.564 11:24:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:12.564 11:24:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:12.564 11:24:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.564 11:24:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:12.564 11:24:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:12.564 11:24:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:12.564 11:24:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:12.564 11:24:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:12.564 11:24:17 -- spdk/autotest.sh@48 -- # udevadm_pid=57924 00:04:12.564 11:24:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:12.564 11:24:17 -- pm/common@17 -- # local monitor 00:04:12.564 11:24:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.564 11:24:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:12.564 11:24:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.564 11:24:17 -- pm/common@25 -- # sleep 1 00:04:12.564 11:24:17 -- pm/common@21 -- # date +%s 00:04:12.564 11:24:17 -- pm/common@21 -- # date +%s 00:04:12.564 11:24:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348257 00:04:12.564 11:24:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348257 00:04:12.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348257_collect-vmstat.pm.log 00:04:12.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348257_collect-cpu-load.pm.log 00:04:13.499 11:24:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.499 11:24:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:13.499 11:24:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.499 11:24:18 -- common/autotest_common.sh@10 -- # set +x 00:04:13.499 11:24:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:13.499 11:24:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:13.499 11:24:18 -- common/autotest_common.sh@10 -- # set +x 00:04:13.499 11:24:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.499 11:24:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.499 11:24:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.499 11:24:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.499 11:24:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.499 11:24:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:13.499 11:24:18 -- common/autotest_common.sh@1457 -- # uname 00:04:13.499 11:24:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:13.499 11:24:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:13.499 11:24:18 -- common/autotest_common.sh@1477 -- # uname 00:04:13.499 11:24:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:13.499 11:24:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:13.499 11:24:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 
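The autotest.sh@33-40 steps above swap the kernel's core_pattern from systemd-coredump to SPDK's core-collector.sh pipe handler, so any crash during the run is funneled into the coredumps output directory; the saved old_core_pattern lets the original handler be restored afterwards. The general shape of that swap, as a sketch assuming root (paths taken from the trace; the restore step is assumed, not shown in this log):

    # Install a pipe handler for core dumps and arrange to restore the old one.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT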
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:13.763 lcov: LCOV version 1.15 00:04:13.763 11:24:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:31.911 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:31.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.812 11:24:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:46.812 11:24:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.812 11:24:51 -- common/autotest_common.sh@10 -- # set +x 00:04:46.812 11:24:51 -- spdk/autotest.sh@78 -- # rm -f 00:04:46.812 11:24:51 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.429 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:47.429 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:47.429 11:24:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:47.429 11:24:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:47.429 11:24:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:47.429 11:24:52 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:47.429 11:24:52 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:47.429 11:24:52 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:47.429 11:24:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:47.429 11:24:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:47.429 11:24:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:47.429 11:24:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:47.429 11:24:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:47.429 11:24:52 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:47.429 11:24:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:47.429 11:24:52 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:47.429 11:24:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
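The lcov invocation at autotest.sh@72 above records a zero-execution baseline (-i, tagged Baseline) over every instrumented file before any test runs; the geninfo warning for nvme_stubs.gcno is expected, since stub objects contain no functions. After the run, real counters can be captured and merged against that baseline so never-executed files still appear in the report. A trimmed sketch (the post-run capture and merge are assumed steps, not shown in this log):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # Before tests: mark every instrumented block as unexecuted.
    $LCOV -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # After tests (assumed): capture real counts, then merge with the baseline.
    $LCOV -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"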
00:04:47.429 11:24:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.429 11:24:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:47.429 11:24:52 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:47.429 11:24:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:47.429 11:24:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.429 11:24:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:47.429 11:24:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.429 11:24:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.429 11:24:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:47.429 11:24:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:47.429 11:24:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.429 No valid GPT data, bailing 00:04:47.429 11:24:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.429 11:24:52 -- scripts/common.sh@394 -- # pt= 00:04:47.429 11:24:52 -- scripts/common.sh@395 -- # return 1 00:04:47.429 11:24:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.429 1+0 records in 00:04:47.429 1+0 records out 00:04:47.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503529 s, 208 MB/s 00:04:47.429 11:24:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.429 11:24:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.429 11:24:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:47.429 11:24:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:47.429 11:24:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:47.429 No valid GPT data, bailing 00:04:47.429 11:24:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:47.429 11:24:52 -- scripts/common.sh@394 -- # pt= 00:04:47.429 11:24:52 -- scripts/common.sh@395 -- # return 1 00:04:47.429 11:24:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:47.429 1+0 records in 00:04:47.429 1+0 records out 00:04:47.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467021 s, 225 MB/s 00:04:47.429 11:24:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.429 11:24:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.429 11:24:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:47.429 11:24:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:47.429 11:24:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:47.688 No valid GPT data, bailing 00:04:47.688 11:24:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:47.688 11:24:52 -- scripts/common.sh@394 -- # pt= 00:04:47.688 11:24:52 -- scripts/common.sh@395 -- # return 1 00:04:47.688 11:24:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:47.688 1+0 records in 00:04:47.688 1+0 records out 00:04:47.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434229 s, 241 MB/s 00:04:47.688 11:24:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.688 11:24:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.688 11:24:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:47.688 11:24:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:47.688 11:24:52 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:04:47.688 No valid GPT data, bailing
00:04:47.688 11:24:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:04:47.688 11:24:52 -- scripts/common.sh@394 -- # pt=
00:04:47.688 11:24:52 -- scripts/common.sh@395 -- # return 1
00:04:47.688 11:24:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:04:47.688 1+0 records in
00:04:47.688 1+0 records out
00:04:47.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421058 s, 249 MB/s
00:04:47.688 11:24:52 -- spdk/autotest.sh@105 -- # sync
00:04:47.688 11:24:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:47.688 11:24:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:47.688 11:24:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:50.221 11:24:55 -- spdk/autotest.sh@111 -- # uname -s
00:04:50.221 11:24:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:50.221 11:24:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:50.221 11:24:55 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:50.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:50.479 Hugepages
00:04:50.479 node   hugesize   free /  total
00:04:50.479 node0  1048576kB  0    /  0
00:04:50.479 node0  2048kB     0    /  0
00:04:50.479
00:04:50.479 Type    BDF           Vendor  Device  NUMA     Driver      Device  Block devices
00:04:50.479 virtio  0000:00:03.0  1af4    1001    unknown  virtio-pci  -       vda
00:04:50.479 NVMe    0000:00:10.0  1b36    0010    unknown  nvme        nvme0   nvme0n1
00:04:50.479 NVMe    0000:00:11.0  1b36    0010    unknown  nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:04:50.479 11:24:55 -- spdk/autotest.sh@117 -- # uname -s
00:04:50.479 11:24:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:50.479 11:24:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:50.479 11:24:55 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:51.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:51.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:51.442 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:51.442 11:24:56 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:52.817 11:24:57 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:52.817 11:24:57 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:52.817 11:24:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:52.817 11:24:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:52.817 11:24:57 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:52.817 11:24:57 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:52.817 11:24:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:52.817 11:24:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:52.817 11:24:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:52.817 11:24:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:52.817 11:24:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:52.817 11:24:57 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:52.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
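Every namespace above passes the same gate before testing: block_in_use asks spdk-gpt.py for a GPT signature, falls back to blkid for a partition-table type, and only when both come back empty is the device's first MiB zeroed to scrub stale metadata. The loop reduced to its shape (a sketch; block_in_use is the helper traced above, and extglob is already enabled by scripts/common.sh):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do              # whole namespaces, not partitions
        if ! block_in_use "$dev"; then            # no GPT, no blkid PTTYPE
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe leftover metadata
        fi
    done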
00:04:52.817 Waiting for block devices as requested 00:04:53.075 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.075 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.075 11:24:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.075 11:24:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.075 11:24:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.075 11:24:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.075 11:24:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1543 -- # continue 00:04:53.075 11:24:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.075 11:24:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:53.075 11:24:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.075 11:24:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.075 11:24:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.075 11:24:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.075 11:24:58 
-- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:53.075 11:24:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.333 11:24:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.333 11:24:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.333 11:24:58 -- common/autotest_common.sh@1543 -- # continue 00:04:53.333 11:24:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.333 11:24:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.333 11:24:58 -- common/autotest_common.sh@10 -- # set +x 00:04:53.333 11:24:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.333 11:24:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.333 11:24:58 -- common/autotest_common.sh@10 -- # set +x 00:04:53.333 11:24:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.157 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.157 11:24:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:54.157 11:24:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.157 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:04:54.157 11:24:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:54.157 11:24:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:54.157 11:24:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.157 11:24:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:54.157 11:24:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:54.157 11:24:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:54.157 11:24:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:54.157 11:24:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:54.157 11:24:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.157 11:24:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.157 11:24:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.157 11:24:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.157 11:24:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:54.157 11:24:59 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:54.157 11:24:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.157 11:24:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.157 11:24:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:54.157 11:24:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.157 11:24:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.157 11:24:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.157 11:24:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:54.157 11:24:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.157 11:24:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.157 11:24:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:54.157 11:24:59 -- common/autotest_common.sh@1572 -- # return 0 
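Two gates are visible in the pre-cleanup traces above. First, the OACS word from nvme id-ctrl is masked to confirm the controller supports namespace management: bit 3 carries that capability, and here 0x12a & 0x8 = 8, so both controllers pass. Second, opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54; these QEMU controllers report 0x0010, so nothing is reverted. Both checks as sketches:

    # Namespace-management support is bit 3 of OACS (Identify Controller).
    oacs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/oacs/ {print $2}')
    (( oacs & 0x8 )) && echo "namespace management supported"

    # Opal revert is gated on one specific PCI device ID.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        [[ $(< "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] \
            && echo "$bdf: would run opal revert"
    done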
00:04:54.157 11:24:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:54.157 11:24:59 -- common/autotest_common.sh@1580 -- # return 0 00:04:54.157 11:24:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:54.157 11:24:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:54.157 11:24:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.157 11:24:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.157 11:24:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:54.157 11:24:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.157 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:04:54.157 11:24:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:54.157 11:24:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.157 11:24:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.157 11:24:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.157 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:04:54.158 ************************************ 00:04:54.158 START TEST env 00:04:54.158 ************************************ 00:04:54.158 11:24:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.416 * Looking for test storage... 00:04:54.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.416 11:24:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.416 11:24:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.416 11:24:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.416 11:24:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.416 11:24:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.416 11:24:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.416 11:24:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.416 11:24:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.416 11:24:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.416 11:24:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.416 11:24:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.416 11:24:59 env -- scripts/common.sh@344 -- # case "$op" in 00:04:54.416 11:24:59 env -- scripts/common.sh@345 -- # : 1 00:04:54.416 11:24:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.416 11:24:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.416 11:24:59 env -- scripts/common.sh@365 -- # decimal 1 00:04:54.416 11:24:59 env -- scripts/common.sh@353 -- # local d=1 00:04:54.416 11:24:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.416 11:24:59 env -- scripts/common.sh@355 -- # echo 1 00:04:54.416 11:24:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.416 11:24:59 env -- scripts/common.sh@366 -- # decimal 2 00:04:54.416 11:24:59 env -- scripts/common.sh@353 -- # local d=2 00:04:54.416 11:24:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.416 11:24:59 env -- scripts/common.sh@355 -- # echo 2 00:04:54.416 11:24:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.416 11:24:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.416 11:24:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.416 11:24:59 env -- scripts/common.sh@368 -- # return 0 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.416 --rc genhtml_branch_coverage=1 00:04:54.416 --rc genhtml_function_coverage=1 00:04:54.416 --rc genhtml_legend=1 00:04:54.416 --rc geninfo_all_blocks=1 00:04:54.416 --rc geninfo_unexecuted_blocks=1 00:04:54.416 00:04:54.416 ' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.416 --rc genhtml_branch_coverage=1 00:04:54.416 --rc genhtml_function_coverage=1 00:04:54.416 --rc genhtml_legend=1 00:04:54.416 --rc geninfo_all_blocks=1 00:04:54.416 --rc geninfo_unexecuted_blocks=1 00:04:54.416 00:04:54.416 ' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.416 --rc genhtml_branch_coverage=1 00:04:54.416 --rc genhtml_function_coverage=1 00:04:54.416 --rc genhtml_legend=1 00:04:54.416 --rc geninfo_all_blocks=1 00:04:54.416 --rc geninfo_unexecuted_blocks=1 00:04:54.416 00:04:54.416 ' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.416 --rc genhtml_branch_coverage=1 00:04:54.416 --rc genhtml_function_coverage=1 00:04:54.416 --rc genhtml_legend=1 00:04:54.416 --rc geninfo_all_blocks=1 00:04:54.416 --rc geninfo_unexecuted_blocks=1 00:04:54.416 00:04:54.416 ' 00:04:54.416 11:24:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.416 11:24:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.416 11:24:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.416 ************************************ 00:04:54.416 START TEST env_memory 00:04:54.416 ************************************ 00:04:54.416 11:24:59 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.416 00:04:54.416 00:04:54.416 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.416 http://cunit.sourceforge.net/ 00:04:54.416 00:04:54.416 00:04:54.416 Suite: memory 00:04:54.416 Test: alloc and free memory map ...[2024-12-16 11:24:59.739745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.416 passed 00:04:54.416 Test: mem map translation ...[2024-12-16 11:24:59.772833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.416 [2024-12-16 11:24:59.772914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.416 [2024-12-16 11:24:59.772986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.416 [2024-12-16 11:24:59.773032] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.416 passed 00:04:54.674 Test: mem map registration ...[2024-12-16 11:24:59.834240] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:54.674 [2024-12-16 11:24:59.834337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:54.674 passed 00:04:54.674 Test: mem map adjacent registrations ...passed 00:04:54.674 00:04:54.674 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.674 suites 1 1 n/a 0 0 00:04:54.674 tests 4 4 4 0 0 00:04:54.674 asserts 152 152 152 0 n/a 00:04:54.674 00:04:54.674 Elapsed time = 0.191 seconds 00:04:54.674 00:04:54.674 real 0m0.213s 00:04:54.674 user 0m0.189s 00:04:54.674 sys 0m0.017s 00:04:54.674 11:24:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.674 11:24:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:54.674 ************************************ 00:04:54.674 END TEST env_memory 00:04:54.674 ************************************ 00:04:54.674 11:24:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.674 11:24:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.674 11:24:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.674 11:24:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.674 ************************************ 00:04:54.674 START TEST env_vtophys 00:04:54.674 ************************************ 00:04:54.674 11:24:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:54.674 EAL: lib.eal log level changed from notice to debug 00:04:54.674 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 1 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 2 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 3 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 4 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 5 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 6 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 7 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 8 as core 0 on socket 0 00:04:54.674 EAL: Detected lcore 9 as core 0 on socket 0 00:04:54.674 EAL: Maximum logical cores by configuration: 128 00:04:54.674 EAL: Detected CPU lcores: 10 00:04:54.674 EAL: Detected NUMA nodes: 1 00:04:54.674 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:54.674 EAL: Detected shared linkage of DPDK 00:04:54.674 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:54.674 EAL: Selected IOVA mode 'PA' 00:04:54.674 EAL: Probing VFIO support... 00:04:54.674 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.674 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:54.674 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.674 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.674 EAL: Setting up physically contiguous memory... 00:04:54.674 EAL: Setting maximum number of open files to 524288 00:04:54.674 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.674 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.674 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.675 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.675 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.675 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.675 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.675 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.675 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.675 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.675 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.675 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.675 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.675 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.675 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.675 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.675 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.675 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.675 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.675 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.675 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.675 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.675 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.675 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.675 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.675 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.675 EAL: Hugepages will be freed exactly as allocated. 00:04:54.675 EAL: No shared files mode enabled, IPC is disabled 00:04:54.675 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: TSC frequency is ~2200000 KHz 00:04:54.933 EAL: Main lcore 0 is ready (tid=7f09e8c13a00;cpuset=[0]) 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 0 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.933 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:54.933 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.933 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.933 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:54.933 00:04:54.933 00:04:54.933 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.933 http://cunit.sourceforge.net/ 00:04:54.933 00:04:54.933 00:04:54.933 Suite: components_suite 00:04:54.933 Test: vtophys_malloc_test ...passed 00:04:54.933 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.933 EAL: Trying to obtain current memory policy. 
00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.933 EAL: Restoring previous memory policy: 4 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was expanded by 130MB 00:04:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.933 EAL: request: mp_malloc_sync 00:04:54.933 EAL: No shared files mode enabled, IPC is disabled 00:04:54.933 EAL: Heap on socket 0 was shrunk by 130MB 00:04:54.933 EAL: Trying to obtain current memory policy. 00:04:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.191 EAL: Restoring previous memory policy: 4 00:04:55.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.191 EAL: request: mp_malloc_sync 00:04:55.191 EAL: No shared files mode enabled, IPC is disabled 00:04:55.191 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.191 EAL: request: mp_malloc_sync 00:04:55.191 EAL: No shared files mode enabled, IPC is disabled 00:04:55.191 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.191 EAL: Trying to obtain current memory policy. 00:04:55.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.449 EAL: Restoring previous memory policy: 4 00:04:55.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.449 EAL: request: mp_malloc_sync 00:04:55.449 EAL: No shared files mode enabled, IPC is disabled 00:04:55.449 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.449 EAL: request: mp_malloc_sync 00:04:55.449 EAL: No shared files mode enabled, IPC is disabled 00:04:55.449 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.449 EAL: Trying to obtain current memory policy. 
00:04:55.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.707 EAL: Restoring previous memory policy: 4 00:04:55.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.707 EAL: request: mp_malloc_sync 00:04:55.707 EAL: No shared files mode enabled, IPC is disabled 00:04:55.707 EAL: Heap on socket 0 was expanded by 1026MB 00:04:55.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.246 EAL: request: mp_malloc_sync 00:04:56.246 EAL: No shared files mode enabled, IPC is disabled 00:04:56.246 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.246 passed 00:04:56.246 00:04:56.246 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.246 suites 1 1 n/a 0 0 00:04:56.246 tests 2 2 2 0 0 00:04:56.246 asserts 5379 5379 5379 0 n/a 00:04:56.246 00:04:56.246 Elapsed time = 1.340 seconds 00:04:56.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.246 EAL: request: mp_malloc_sync 00:04:56.246 EAL: No shared files mode enabled, IPC is disabled 00:04:56.246 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.246 EAL: No shared files mode enabled, IPC is disabled 00:04:56.246 EAL: No shared files mode enabled, IPC is disabled 00:04:56.246 EAL: No shared files mode enabled, IPC is disabled 00:04:56.246 00:04:56.246 real 0m1.552s 00:04:56.246 user 0m0.844s 00:04:56.246 sys 0m0.572s 00:04:56.246 11:25:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.246 11:25:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 END TEST env_vtophys 00:04:56.246 ************************************ 00:04:56.246 11:25:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.246 11:25:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.246 11:25:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.246 11:25:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 START TEST env_pci 00:04:56.246 ************************************ 00:04:56.246 11:25:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.246 00:04:56.246 00:04:56.246 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.246 http://cunit.sourceforge.net/ 00:04:56.246 00:04:56.246 00:04:56.246 Suite: pci 00:04:56.246 Test: pci_hook ...[2024-12-16 11:25:01.578753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60194 has claimed it 00:04:56.246 passed 00:04:56.246 00:04:56.246 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.246 suites 1 1 n/a 0 0 00:04:56.246 tests 1 1 1 0 0 00:04:56.246 asserts 25 25 25 0 n/a 00:04:56.246 00:04:56.246 Elapsed time = 0.002 seconds 00:04:56.246 EAL: Cannot find device (10000:00:01.0) 00:04:56.246 EAL: Failed to attach device on primary process 00:04:56.246 00:04:56.246 real 0m0.021s 00:04:56.246 user 0m0.010s 00:04:56.246 sys 0m0.011s 00:04:56.246 11:25:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.246 11:25:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 END TEST env_pci 00:04:56.246 ************************************ 00:04:56.246 11:25:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.246 11:25:01 env -- env/env.sh@15 -- # uname 00:04:56.246 11:25:01 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.246 11:25:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.246 11:25:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.246 11:25:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:56.246 11:25:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.246 11:25:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 START TEST env_dpdk_post_init 00:04:56.246 ************************************ 00:04:56.246 11:25:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.528 EAL: Detected CPU lcores: 10 00:04:56.528 EAL: Detected NUMA nodes: 1 00:04:56.528 EAL: Detected shared linkage of DPDK 00:04:56.528 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.528 EAL: Selected IOVA mode 'PA' 00:04:56.528 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.528 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.528 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.528 Starting DPDK initialization... 00:04:56.528 Starting SPDK post initialization... 00:04:56.528 SPDK NVMe probe 00:04:56.528 Attaching to 0000:00:10.0 00:04:56.528 Attaching to 0000:00:11.0 00:04:56.528 Attached to 0000:00:10.0 00:04:56.528 Attached to 0000:00:11.0 00:04:56.528 Cleaning up... 00:04:56.528 00:04:56.528 real 0m0.182s 00:04:56.528 user 0m0.051s 00:04:56.528 sys 0m0.030s 00:04:56.528 11:25:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.528 11:25:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.528 ************************************ 00:04:56.528 END TEST env_dpdk_post_init 00:04:56.528 ************************************ 00:04:56.528 11:25:01 env -- env/env.sh@26 -- # uname 00:04:56.528 11:25:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.528 11:25:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.528 11:25:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.528 11:25:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.528 11:25:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.528 ************************************ 00:04:56.528 START TEST env_mem_callbacks 00:04:56.528 ************************************ 00:04:56.528 11:25:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.528 EAL: Detected CPU lcores: 10 00:04:56.528 EAL: Detected NUMA nodes: 1 00:04:56.528 EAL: Detected shared linkage of DPDK 00:04:56.528 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.528 EAL: Selected IOVA mode 'PA' 00:04:56.786 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.786 00:04:56.786 00:04:56.786 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.786 http://cunit.sourceforge.net/ 00:04:56.786 00:04:56.786 00:04:56.786 Suite: memory 00:04:56.786 Test: test ... 
00:04:56.786 register 0x200000200000 2097152 00:04:56.786 malloc 3145728 00:04:56.786 register 0x200000400000 4194304 00:04:56.786 buf 0x200000500000 len 3145728 PASSED 00:04:56.786 malloc 64 00:04:56.786 buf 0x2000004fff40 len 64 PASSED 00:04:56.786 malloc 4194304 00:04:56.786 register 0x200000800000 6291456 00:04:56.786 buf 0x200000a00000 len 4194304 PASSED 00:04:56.786 free 0x200000500000 3145728 00:04:56.786 free 0x2000004fff40 64 00:04:56.786 unregister 0x200000400000 4194304 PASSED 00:04:56.786 free 0x200000a00000 4194304 00:04:56.786 unregister 0x200000800000 6291456 PASSED 00:04:56.786 malloc 8388608 00:04:56.786 register 0x200000400000 10485760 00:04:56.786 buf 0x200000600000 len 8388608 PASSED 00:04:56.786 free 0x200000600000 8388608 00:04:56.786 unregister 0x200000400000 10485760 PASSED 00:04:56.786 passed 00:04:56.786 00:04:56.786 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.786 suites 1 1 n/a 0 0 00:04:56.786 tests 1 1 1 0 0 00:04:56.786 asserts 15 15 15 0 n/a 00:04:56.786 00:04:56.786 Elapsed time = 0.009 seconds 00:04:56.786 00:04:56.786 real 0m0.159s 00:04:56.786 user 0m0.024s 00:04:56.786 sys 0m0.032s 00:04:56.786 11:25:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.786 11:25:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.786 ************************************ 00:04:56.786 END TEST env_mem_callbacks 00:04:56.786 ************************************ 00:04:56.786 00:04:56.786 real 0m2.567s 00:04:56.786 user 0m1.318s 00:04:56.786 sys 0m0.888s 00:04:56.786 11:25:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.786 11:25:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.786 ************************************ 00:04:56.786 END TEST env 00:04:56.786 ************************************ 00:04:56.786 11:25:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.786 11:25:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.786 11:25:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.786 11:25:02 -- common/autotest_common.sh@10 -- # set +x 00:04:56.786 ************************************ 00:04:56.786 START TEST rpc 00:04:56.786 ************************************ 00:04:56.786 11:25:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.786 * Looking for test storage... 
00:04:56.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.786 11:25:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.044 11:25:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.044 11:25:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.044 11:25:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.044 11:25:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.044 11:25:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.044 11:25:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.044 11:25:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.044 11:25:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.044 11:25:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.044 11:25:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.044 11:25:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.044 11:25:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.044 11:25:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.044 11:25:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.044 11:25:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.044 --rc genhtml_branch_coverage=1 00:04:57.044 --rc genhtml_function_coverage=1 00:04:57.044 --rc genhtml_legend=1 00:04:57.044 --rc geninfo_all_blocks=1 00:04:57.044 --rc geninfo_unexecuted_blocks=1 00:04:57.044 00:04:57.044 ' 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.044 --rc genhtml_branch_coverage=1 00:04:57.044 --rc genhtml_function_coverage=1 00:04:57.044 --rc genhtml_legend=1 00:04:57.044 --rc geninfo_all_blocks=1 00:04:57.044 --rc geninfo_unexecuted_blocks=1 00:04:57.044 00:04:57.044 ' 00:04:57.044 11:25:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.044 --rc genhtml_branch_coverage=1 00:04:57.044 --rc genhtml_function_coverage=1 00:04:57.044 --rc 
genhtml_legend=1 00:04:57.044 --rc geninfo_all_blocks=1 00:04:57.045 --rc geninfo_unexecuted_blocks=1 00:04:57.045 00:04:57.045 ' 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.045 --rc genhtml_branch_coverage=1 00:04:57.045 --rc genhtml_function_coverage=1 00:04:57.045 --rc genhtml_legend=1 00:04:57.045 --rc geninfo_all_blocks=1 00:04:57.045 --rc geninfo_unexecuted_blocks=1 00:04:57.045 00:04:57.045 ' 00:04:57.045 11:25:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60317 00:04:57.045 11:25:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.045 11:25:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60317 00:04:57.045 11:25:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 60317 ']' 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.045 11:25:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.045 [2024-12-16 11:25:02.377629] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:04:57.045 [2024-12-16 11:25:02.377748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 00:04:57.302 [2024-12-16 11:25:02.522775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.302 [2024-12-16 11:25:02.589979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.302 [2024-12-16 11:25:02.590062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60317' to capture a snapshot of events at runtime. 00:04:57.302 [2024-12-16 11:25:02.590074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.302 [2024-12-16 11:25:02.590084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.302 [2024-12-16 11:25:02.590091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60317 for offline analysis/debug. 
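************************************
EDITOR'S NOTE: capturing the trace snapshot
************************************
The app_setup_trace notices above describe how to inspect the bdev tracepoint
group that spdk_tgt was started with (-e bdev). A minimal sketch of that
workflow, built only from the commands and paths the notices themselves report
(pid 60317 and the shm path are taken from the log, not invented):

  # Snapshot events from the running target named spdk_tgt, pid 60317
  spdk_trace -s spdk_tgt -p 60317
  # Or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid60317 /tmp/spdk_tgt_trace.pid60317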
00:04:57.302 [2024-12-16 11:25:02.590524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.235 11:25:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.235 11:25:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.235 11:25:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.235 11:25:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.235 11:25:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.235 11:25:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.235 11:25:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.235 11:25:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.235 11:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 ************************************ 00:04:58.235 START TEST rpc_integrity 00:04:58.235 ************************************ 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.235 { 00:04:58.235 "aliases": [ 00:04:58.235 "5d2f2187-b936-4146-a460-b96ed84fd7d3" 00:04:58.235 ], 00:04:58.235 "assigned_rate_limits": { 00:04:58.235 "r_mbytes_per_sec": 0, 00:04:58.235 "rw_ios_per_sec": 0, 00:04:58.235 "rw_mbytes_per_sec": 0, 00:04:58.235 "w_mbytes_per_sec": 0 00:04:58.235 }, 00:04:58.235 "block_size": 512, 00:04:58.235 "claimed": false, 00:04:58.235 "driver_specific": {}, 00:04:58.235 "memory_domains": [ 00:04:58.235 { 00:04:58.235 "dma_device_id": "system", 00:04:58.235 "dma_device_type": 1 00:04:58.235 }, 00:04:58.235 { 00:04:58.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.235 "dma_device_type": 2 00:04:58.235 } 00:04:58.235 ], 00:04:58.235 "name": "Malloc0", 
00:04:58.235 "num_blocks": 16384, 00:04:58.235 "product_name": "Malloc disk", 00:04:58.235 "supported_io_types": { 00:04:58.235 "abort": true, 00:04:58.235 "compare": false, 00:04:58.235 "compare_and_write": false, 00:04:58.235 "copy": true, 00:04:58.235 "flush": true, 00:04:58.235 "get_zone_info": false, 00:04:58.235 "nvme_admin": false, 00:04:58.235 "nvme_io": false, 00:04:58.235 "nvme_io_md": false, 00:04:58.235 "nvme_iov_md": false, 00:04:58.235 "read": true, 00:04:58.235 "reset": true, 00:04:58.235 "seek_data": false, 00:04:58.235 "seek_hole": false, 00:04:58.235 "unmap": true, 00:04:58.235 "write": true, 00:04:58.235 "write_zeroes": true, 00:04:58.235 "zcopy": true, 00:04:58.235 "zone_append": false, 00:04:58.235 "zone_management": false 00:04:58.235 }, 00:04:58.235 "uuid": "5d2f2187-b936-4146-a460-b96ed84fd7d3", 00:04:58.235 "zoned": false 00:04:58.235 } 00:04:58.235 ]' 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 [2024-12-16 11:25:03.550131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.235 [2024-12-16 11:25:03.550230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.235 [2024-12-16 11:25:03.550259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa4b440 00:04:58.235 [2024-12-16 11:25:03.550270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.235 [2024-12-16 11:25:03.552284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.235 [2024-12-16 11:25:03.552321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.235 Passthru0 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.235 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.235 { 00:04:58.235 "aliases": [ 00:04:58.235 "5d2f2187-b936-4146-a460-b96ed84fd7d3" 00:04:58.235 ], 00:04:58.235 "assigned_rate_limits": { 00:04:58.235 "r_mbytes_per_sec": 0, 00:04:58.235 "rw_ios_per_sec": 0, 00:04:58.235 "rw_mbytes_per_sec": 0, 00:04:58.235 "w_mbytes_per_sec": 0 00:04:58.235 }, 00:04:58.235 "block_size": 512, 00:04:58.235 "claim_type": "exclusive_write", 00:04:58.235 "claimed": true, 00:04:58.235 "driver_specific": {}, 00:04:58.235 "memory_domains": [ 00:04:58.235 { 00:04:58.235 "dma_device_id": "system", 00:04:58.235 "dma_device_type": 1 00:04:58.235 }, 00:04:58.235 { 00:04:58.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.235 "dma_device_type": 2 00:04:58.235 } 00:04:58.235 ], 00:04:58.235 "name": "Malloc0", 00:04:58.235 "num_blocks": 16384, 00:04:58.235 "product_name": "Malloc disk", 00:04:58.235 "supported_io_types": { 00:04:58.235 "abort": true, 00:04:58.235 "compare": false, 00:04:58.235 
"compare_and_write": false, 00:04:58.235 "copy": true, 00:04:58.235 "flush": true, 00:04:58.235 "get_zone_info": false, 00:04:58.235 "nvme_admin": false, 00:04:58.235 "nvme_io": false, 00:04:58.235 "nvme_io_md": false, 00:04:58.235 "nvme_iov_md": false, 00:04:58.235 "read": true, 00:04:58.235 "reset": true, 00:04:58.235 "seek_data": false, 00:04:58.235 "seek_hole": false, 00:04:58.235 "unmap": true, 00:04:58.235 "write": true, 00:04:58.235 "write_zeroes": true, 00:04:58.235 "zcopy": true, 00:04:58.235 "zone_append": false, 00:04:58.235 "zone_management": false 00:04:58.235 }, 00:04:58.235 "uuid": "5d2f2187-b936-4146-a460-b96ed84fd7d3", 00:04:58.235 "zoned": false 00:04:58.235 }, 00:04:58.235 { 00:04:58.235 "aliases": [ 00:04:58.236 "4dfe114f-be3d-5f28-8540-a697c68d57fc" 00:04:58.236 ], 00:04:58.236 "assigned_rate_limits": { 00:04:58.236 "r_mbytes_per_sec": 0, 00:04:58.236 "rw_ios_per_sec": 0, 00:04:58.236 "rw_mbytes_per_sec": 0, 00:04:58.236 "w_mbytes_per_sec": 0 00:04:58.236 }, 00:04:58.236 "block_size": 512, 00:04:58.236 "claimed": false, 00:04:58.236 "driver_specific": { 00:04:58.236 "passthru": { 00:04:58.236 "base_bdev_name": "Malloc0", 00:04:58.236 "name": "Passthru0" 00:04:58.236 } 00:04:58.236 }, 00:04:58.236 "memory_domains": [ 00:04:58.236 { 00:04:58.236 "dma_device_id": "system", 00:04:58.236 "dma_device_type": 1 00:04:58.236 }, 00:04:58.236 { 00:04:58.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.236 "dma_device_type": 2 00:04:58.236 } 00:04:58.236 ], 00:04:58.236 "name": "Passthru0", 00:04:58.236 "num_blocks": 16384, 00:04:58.236 "product_name": "passthru", 00:04:58.236 "supported_io_types": { 00:04:58.236 "abort": true, 00:04:58.236 "compare": false, 00:04:58.236 "compare_and_write": false, 00:04:58.236 "copy": true, 00:04:58.236 "flush": true, 00:04:58.236 "get_zone_info": false, 00:04:58.236 "nvme_admin": false, 00:04:58.236 "nvme_io": false, 00:04:58.236 "nvme_io_md": false, 00:04:58.236 "nvme_iov_md": false, 00:04:58.236 "read": true, 00:04:58.236 "reset": true, 00:04:58.236 "seek_data": false, 00:04:58.236 "seek_hole": false, 00:04:58.236 "unmap": true, 00:04:58.236 "write": true, 00:04:58.236 "write_zeroes": true, 00:04:58.236 "zcopy": true, 00:04:58.236 "zone_append": false, 00:04:58.236 "zone_management": false 00:04:58.236 }, 00:04:58.236 "uuid": "4dfe114f-be3d-5f28-8540-a697c68d57fc", 00:04:58.236 "zoned": false 00:04:58.236 } 00:04:58.236 ]' 00:04:58.236 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.236 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.236 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.236 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.236 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.236 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.236 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.236 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.236 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.493 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.493 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.493 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.493 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:58.494 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.494 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.494 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.494 11:25:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.494 00:04:58.494 real 0m0.327s 00:04:58.494 user 0m0.209s 00:04:58.494 sys 0m0.040s 00:04:58.494 ************************************ 00:04:58.494 END TEST rpc_integrity 00:04:58.494 ************************************ 00:04:58.494 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.494 11:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 11:25:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.494 11:25:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.494 11:25:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.494 11:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 ************************************ 00:04:58.494 START TEST rpc_plugins 00:04:58.494 ************************************ 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.494 { 00:04:58.494 "aliases": [ 00:04:58.494 "04c159ba-cf4d-4051-9427-37e53569b66e" 00:04:58.494 ], 00:04:58.494 "assigned_rate_limits": { 00:04:58.494 "r_mbytes_per_sec": 0, 00:04:58.494 "rw_ios_per_sec": 0, 00:04:58.494 "rw_mbytes_per_sec": 0, 00:04:58.494 "w_mbytes_per_sec": 0 00:04:58.494 }, 00:04:58.494 "block_size": 4096, 00:04:58.494 "claimed": false, 00:04:58.494 "driver_specific": {}, 00:04:58.494 "memory_domains": [ 00:04:58.494 { 00:04:58.494 "dma_device_id": "system", 00:04:58.494 "dma_device_type": 1 00:04:58.494 }, 00:04:58.494 { 00:04:58.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.494 "dma_device_type": 2 00:04:58.494 } 00:04:58.494 ], 00:04:58.494 "name": "Malloc1", 00:04:58.494 "num_blocks": 256, 00:04:58.494 "product_name": "Malloc disk", 00:04:58.494 "supported_io_types": { 00:04:58.494 "abort": true, 00:04:58.494 "compare": false, 00:04:58.494 "compare_and_write": false, 00:04:58.494 "copy": true, 00:04:58.494 "flush": true, 00:04:58.494 "get_zone_info": false, 00:04:58.494 "nvme_admin": false, 00:04:58.494 "nvme_io": false, 00:04:58.494 "nvme_io_md": false, 00:04:58.494 "nvme_iov_md": false, 00:04:58.494 "read": true, 00:04:58.494 "reset": true, 00:04:58.494 "seek_data": false, 00:04:58.494 "seek_hole": false, 00:04:58.494 "unmap": true, 00:04:58.494 "write": true, 00:04:58.494 "write_zeroes": true, 00:04:58.494 "zcopy": true, 00:04:58.494 "zone_append": false, 
00:04:58.494 "zone_management": false 00:04:58.494 }, 00:04:58.494 "uuid": "04c159ba-cf4d-4051-9427-37e53569b66e", 00:04:58.494 "zoned": false 00:04:58.494 } 00:04:58.494 ]' 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.494 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.753 11:25:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.753 00:04:58.753 real 0m0.158s 00:04:58.753 user 0m0.101s 00:04:58.753 sys 0m0.016s 00:04:58.753 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.753 ************************************ 00:04:58.753 11:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.753 END TEST rpc_plugins 00:04:58.753 ************************************ 00:04:58.753 11:25:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.753 11:25:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.753 11:25:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.753 11:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.753 ************************************ 00:04:58.753 START TEST rpc_trace_cmd_test 00:04:58.753 ************************************ 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.753 "bdev": { 00:04:58.753 "mask": "0x8", 00:04:58.753 "tpoint_mask": "0xffffffffffffffff" 00:04:58.753 }, 00:04:58.753 "bdev_nvme": { 00:04:58.753 "mask": "0x4000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "bdev_raid": { 00:04:58.753 "mask": "0x20000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "blob": { 00:04:58.753 "mask": "0x10000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "blobfs": { 00:04:58.753 "mask": "0x80", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "dsa": { 00:04:58.753 "mask": "0x200", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "ftl": { 00:04:58.753 "mask": "0x40", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "iaa": { 00:04:58.753 "mask": "0x1000", 
00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "iscsi_conn": { 00:04:58.753 "mask": "0x2", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "nvme_pcie": { 00:04:58.753 "mask": "0x800", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "nvme_tcp": { 00:04:58.753 "mask": "0x2000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "nvmf_rdma": { 00:04:58.753 "mask": "0x10", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "nvmf_tcp": { 00:04:58.753 "mask": "0x20", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "scheduler": { 00:04:58.753 "mask": "0x40000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "scsi": { 00:04:58.753 "mask": "0x4", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "sock": { 00:04:58.753 "mask": "0x8000", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "thread": { 00:04:58.753 "mask": "0x400", 00:04:58.753 "tpoint_mask": "0x0" 00:04:58.753 }, 00:04:58.753 "tpoint_group_mask": "0x8", 00:04:58.753 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60317" 00:04:58.753 }' 00:04:58.753 11:25:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.753 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.012 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.012 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.012 11:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.012 00:04:59.012 real 0m0.270s 00:04:59.012 user 0m0.232s 00:04:59.012 sys 0m0.026s 00:04:59.012 11:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.012 ************************************ 00:04:59.012 END TEST rpc_trace_cmd_test 00:04:59.012 11:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.012 ************************************ 00:04:59.012 11:25:04 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:59.012 11:25:04 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:59.012 11:25:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.012 11:25:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.012 11:25:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.012 ************************************ 00:04:59.012 START TEST go_rpc 00:04:59.012 ************************************ 00:04:59.012 11:25:04 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.012 11:25:04 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.012 11:25:04 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.012 11:25:04 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["7b520292-5a7a-46af-ac5d-689247ec4e7f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"7b520292-5a7a-46af-ac5d-689247ec4e7f","zoned":false}]' 00:04:59.012 11:25:04 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:59.270 11:25:04 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:59.271 11:25:04 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.271 11:25:04 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.271 11:25:04 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 11:25:04 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.271 11:25:04 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:59.271 11:25:04 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:59.271 11:25:04 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:59.271 11:25:04 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:59.271 00:04:59.271 real 0m0.216s 00:04:59.271 user 0m0.144s 00:04:59.271 sys 0m0.039s 00:04:59.271 11:25:04 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.271 ************************************ 00:04:59.271 END TEST go_rpc 00:04:59.271 11:25:04 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 ************************************ 00:04:59.271 11:25:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.271 11:25:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.271 11:25:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.271 11:25:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.271 11:25:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 ************************************ 00:04:59.271 START TEST rpc_daemon_integrity 00:04:59.271 ************************************ 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.271 
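************************************
EDITOR'S NOTE: the integrity test pattern
************************************
rpc_integrity above and rpc_daemon_integrity starting here follow the same
shape: assert the bdev list is empty, create a malloc bdev, layer a passthru
bdev on top of it, assert both show up (with the passthru claiming its base),
then tear down in reverse order and assert the list is empty again. A rough
equivalent of that sequence with the Python client, assuming the default
/var/tmp/spdk.sock socket (a sketch, not the test script itself):

  scripts/rpc.py bdev_get_bdevs                        # expect []
  scripts/rpc.py bdev_malloc_create 8 512              # returns e.g. Malloc3
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p Passthru0
  scripts/rpc.py bdev_get_bdevs                        # expect 2 entries
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc3
  scripts/rpc.py bdev_get_bdevs                        # expect [] again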
11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.271 { 00:04:59.271 "aliases": [ 00:04:59.271 "21380b3c-955c-4b36-8a67-ea0dd5824cdc" 00:04:59.271 ], 00:04:59.271 "assigned_rate_limits": { 00:04:59.271 "r_mbytes_per_sec": 0, 00:04:59.271 "rw_ios_per_sec": 0, 00:04:59.271 "rw_mbytes_per_sec": 0, 00:04:59.271 "w_mbytes_per_sec": 0 00:04:59.271 }, 00:04:59.271 "block_size": 512, 00:04:59.271 "claimed": false, 00:04:59.271 "driver_specific": {}, 00:04:59.271 "memory_domains": [ 00:04:59.271 { 00:04:59.271 "dma_device_id": "system", 00:04:59.271 "dma_device_type": 1 00:04:59.271 }, 00:04:59.271 { 00:04:59.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.271 "dma_device_type": 2 00:04:59.271 } 00:04:59.271 ], 00:04:59.271 "name": "Malloc3", 00:04:59.271 "num_blocks": 16384, 00:04:59.271 "product_name": "Malloc disk", 00:04:59.271 "supported_io_types": { 00:04:59.271 "abort": true, 00:04:59.271 "compare": false, 00:04:59.271 "compare_and_write": false, 00:04:59.271 "copy": true, 00:04:59.271 "flush": true, 00:04:59.271 "get_zone_info": false, 00:04:59.271 "nvme_admin": false, 00:04:59.271 "nvme_io": false, 00:04:59.271 "nvme_io_md": false, 00:04:59.271 "nvme_iov_md": false, 00:04:59.271 "read": true, 00:04:59.271 "reset": true, 00:04:59.271 "seek_data": false, 00:04:59.271 "seek_hole": false, 00:04:59.271 "unmap": true, 00:04:59.271 "write": true, 00:04:59.271 "write_zeroes": true, 00:04:59.271 "zcopy": true, 00:04:59.271 "zone_append": false, 00:04:59.271 "zone_management": false 00:04:59.271 }, 00:04:59.271 "uuid": "21380b3c-955c-4b36-8a67-ea0dd5824cdc", 00:04:59.271 "zoned": false 00:04:59.271 } 00:04:59.271 ]' 00:04:59.271 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.529 [2024-12-16 11:25:04.712417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:59.529 [2024-12-16 11:25:04.712541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.529 [2024-12-16 11:25:04.712570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa38be0 00:04:59.529 [2024-12-16 11:25:04.712582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:59.529 [2024-12-16 11:25:04.714698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.529 [2024-12-16 11:25:04.714744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.529 Passthru0 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.529 { 00:04:59.529 "aliases": [ 00:04:59.529 "21380b3c-955c-4b36-8a67-ea0dd5824cdc" 00:04:59.529 ], 00:04:59.529 "assigned_rate_limits": { 00:04:59.529 "r_mbytes_per_sec": 0, 00:04:59.529 "rw_ios_per_sec": 0, 00:04:59.529 "rw_mbytes_per_sec": 0, 00:04:59.529 "w_mbytes_per_sec": 0 00:04:59.529 }, 00:04:59.529 "block_size": 512, 00:04:59.529 "claim_type": "exclusive_write", 00:04:59.529 "claimed": true, 00:04:59.529 "driver_specific": {}, 00:04:59.529 "memory_domains": [ 00:04:59.529 { 00:04:59.529 "dma_device_id": "system", 00:04:59.529 "dma_device_type": 1 00:04:59.529 }, 00:04:59.529 { 00:04:59.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.529 "dma_device_type": 2 00:04:59.529 } 00:04:59.529 ], 00:04:59.529 "name": "Malloc3", 00:04:59.529 "num_blocks": 16384, 00:04:59.529 "product_name": "Malloc disk", 00:04:59.529 "supported_io_types": { 00:04:59.529 "abort": true, 00:04:59.529 "compare": false, 00:04:59.529 "compare_and_write": false, 00:04:59.529 "copy": true, 00:04:59.529 "flush": true, 00:04:59.529 "get_zone_info": false, 00:04:59.529 "nvme_admin": false, 00:04:59.529 "nvme_io": false, 00:04:59.529 "nvme_io_md": false, 00:04:59.529 "nvme_iov_md": false, 00:04:59.529 "read": true, 00:04:59.529 "reset": true, 00:04:59.529 "seek_data": false, 00:04:59.529 "seek_hole": false, 00:04:59.529 "unmap": true, 00:04:59.529 "write": true, 00:04:59.529 "write_zeroes": true, 00:04:59.529 "zcopy": true, 00:04:59.529 "zone_append": false, 00:04:59.529 "zone_management": false 00:04:59.529 }, 00:04:59.529 "uuid": "21380b3c-955c-4b36-8a67-ea0dd5824cdc", 00:04:59.529 "zoned": false 00:04:59.529 }, 00:04:59.529 { 00:04:59.529 "aliases": [ 00:04:59.529 "7ebac42b-dfab-5b38-adc1-148a0ecef440" 00:04:59.529 ], 00:04:59.529 "assigned_rate_limits": { 00:04:59.529 "r_mbytes_per_sec": 0, 00:04:59.529 "rw_ios_per_sec": 0, 00:04:59.529 "rw_mbytes_per_sec": 0, 00:04:59.529 "w_mbytes_per_sec": 0 00:04:59.529 }, 00:04:59.529 "block_size": 512, 00:04:59.529 "claimed": false, 00:04:59.529 "driver_specific": { 00:04:59.529 "passthru": { 00:04:59.529 "base_bdev_name": "Malloc3", 00:04:59.529 "name": "Passthru0" 00:04:59.529 } 00:04:59.529 }, 00:04:59.529 "memory_domains": [ 00:04:59.529 { 00:04:59.529 "dma_device_id": "system", 00:04:59.529 "dma_device_type": 1 00:04:59.529 }, 00:04:59.529 { 00:04:59.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.529 "dma_device_type": 2 00:04:59.529 } 00:04:59.529 ], 00:04:59.529 "name": "Passthru0", 00:04:59.529 "num_blocks": 16384, 00:04:59.529 "product_name": "passthru", 00:04:59.529 "supported_io_types": { 00:04:59.529 "abort": true, 00:04:59.529 "compare": false, 00:04:59.529 "compare_and_write": false, 00:04:59.529 "copy": true, 
00:04:59.529 "flush": true, 00:04:59.529 "get_zone_info": false, 00:04:59.529 "nvme_admin": false, 00:04:59.529 "nvme_io": false, 00:04:59.529 "nvme_io_md": false, 00:04:59.529 "nvme_iov_md": false, 00:04:59.529 "read": true, 00:04:59.529 "reset": true, 00:04:59.529 "seek_data": false, 00:04:59.529 "seek_hole": false, 00:04:59.529 "unmap": true, 00:04:59.529 "write": true, 00:04:59.529 "write_zeroes": true, 00:04:59.529 "zcopy": true, 00:04:59.529 "zone_append": false, 00:04:59.529 "zone_management": false 00:04:59.529 }, 00:04:59.529 "uuid": "7ebac42b-dfab-5b38-adc1-148a0ecef440", 00:04:59.529 "zoned": false 00:04:59.529 } 00:04:59.529 ]' 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.529 00:04:59.529 real 0m0.321s 00:04:59.529 user 0m0.203s 00:04:59.529 sys 0m0.048s 00:04:59.529 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.530 11:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.530 ************************************ 00:04:59.530 END TEST rpc_daemon_integrity 00:04:59.530 ************************************ 00:04:59.530 11:25:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.530 11:25:04 rpc -- rpc/rpc.sh@84 -- # killprocess 60317 00:04:59.530 11:25:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 60317 ']' 00:04:59.530 11:25:04 rpc -- common/autotest_common.sh@958 -- # kill -0 60317 00:04:59.530 11:25:04 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.530 11:25:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.530 11:25:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60317 00:04:59.787 11:25:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.787 killing process with pid 60317 00:04:59.787 11:25:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.787 11:25:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60317' 00:04:59.787 11:25:04 rpc -- 
common/autotest_common.sh@973 -- # kill 60317 00:04:59.787 11:25:04 rpc -- common/autotest_common.sh@978 -- # wait 60317 00:05:00.045 00:05:00.045 real 0m3.260s 00:05:00.045 user 0m4.207s 00:05:00.045 sys 0m0.813s 00:05:00.045 11:25:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.045 11:25:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.045 ************************************ 00:05:00.045 END TEST rpc 00:05:00.045 ************************************ 00:05:00.045 11:25:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.045 11:25:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.045 11:25:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.045 11:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:00.045 ************************************ 00:05:00.045 START TEST skip_rpc 00:05:00.045 ************************************ 00:05:00.045 11:25:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.303 * Looking for test storage... 00:05:00.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.303 11:25:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.303 11:25:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.303 11:25:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.303 11:25:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.303 11:25:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.304 11:25:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.304 11:25:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.304 11:25:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.304 --rc genhtml_branch_coverage=1 00:05:00.304 --rc genhtml_function_coverage=1 00:05:00.304 --rc genhtml_legend=1 00:05:00.304 --rc geninfo_all_blocks=1 00:05:00.304 --rc geninfo_unexecuted_blocks=1 00:05:00.304 00:05:00.304 ' 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.304 --rc genhtml_branch_coverage=1 00:05:00.304 --rc genhtml_function_coverage=1 00:05:00.304 --rc genhtml_legend=1 00:05:00.304 --rc geninfo_all_blocks=1 00:05:00.304 --rc geninfo_unexecuted_blocks=1 00:05:00.304 00:05:00.304 ' 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.304 --rc genhtml_branch_coverage=1 00:05:00.304 --rc genhtml_function_coverage=1 00:05:00.304 --rc genhtml_legend=1 00:05:00.304 --rc geninfo_all_blocks=1 00:05:00.304 --rc geninfo_unexecuted_blocks=1 00:05:00.304 00:05:00.304 ' 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.304 --rc genhtml_branch_coverage=1 00:05:00.304 --rc genhtml_function_coverage=1 00:05:00.304 --rc genhtml_legend=1 00:05:00.304 --rc geninfo_all_blocks=1 00:05:00.304 --rc geninfo_unexecuted_blocks=1 00:05:00.304 00:05:00.304 ' 00:05:00.304 11:25:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.304 11:25:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.304 11:25:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.304 11:25:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.304 ************************************ 00:05:00.304 START TEST skip_rpc 00:05:00.304 ************************************ 00:05:00.304 11:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:00.304 11:25:05 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=60586 00:05:00.304 11:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.304 11:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.304 11:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.304 [2024-12-16 11:25:05.706077] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:00.304 [2024-12-16 11:25:05.706199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:05:00.562 [2024-12-16 11:25:05.858913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.562 [2024-12-16 11:25:05.932970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 2024/12/16 11:25:10 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60586 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 60586 ']' 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 60586 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60586 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60586' 00:05:05.896 killing process with pid 60586 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 60586 00:05:05.896 11:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 60586 00:05:05.896 00:05:05.896 real 0m5.443s 00:05:05.896 user 0m5.030s 00:05:05.896 sys 0m0.315s 00:05:05.896 11:25:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.896 11:25:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 ************************************ 00:05:05.896 END TEST skip_rpc 00:05:05.896 ************************************ 00:05:05.896 11:25:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.896 11:25:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.896 11:25:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.896 11:25:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 ************************************ 00:05:05.896 START TEST skip_rpc_with_json 00:05:05.896 ************************************ 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60679 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60679 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 60679 ']' 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.896 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 [2024-12-16 11:25:11.222627] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
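************************************
EDITOR'S NOTE: the skip_rpc negative case
************************************
The skip_rpc test above starts the target with --no-rpc-server, so no
/var/tmp/spdk.sock listener exists and the client's dial error ("no such file
or directory") is the expected, asserted outcome. A condensed sketch of that
check, using the same binary and flags as this run:

  # No RPC listener is created, so any RPC call must fail
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  if scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC unexpectedly succeeded" && exit 1
  fi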
00:05:05.896 [2024-12-16 11:25:11.222768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60679 ] 00:05:06.153 [2024-12-16 11:25:11.371135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.153 [2024-12-16 11:25:11.436029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.410 [2024-12-16 11:25:11.730876] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.410 2024/12/16 11:25:11 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:06.410 request: 00:05:06.410 { 00:05:06.410 "method": "nvmf_get_transports", 00:05:06.410 "params": { 00:05:06.410 "trtype": "tcp" 00:05:06.410 } 00:05:06.410 } 00:05:06.410 Got JSON-RPC error response 00:05:06.410 GoRPCClient: error on JSON-RPC call 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.410 [2024-12-16 11:25:11.742983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.410 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.668 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.668 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.668 { 00:05:06.668 "subsystems": [ 00:05:06.668 { 00:05:06.668 "subsystem": "fsdev", 00:05:06.668 "config": [ 00:05:06.668 { 00:05:06.668 "method": "fsdev_set_opts", 00:05:06.668 "params": { 00:05:06.668 "fsdev_io_cache_size": 256, 00:05:06.668 "fsdev_io_pool_size": 65535 00:05:06.668 } 00:05:06.668 } 00:05:06.668 ] 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "subsystem": "keyring", 00:05:06.668 "config": [] 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "subsystem": "iobuf", 00:05:06.668 "config": [ 00:05:06.668 { 00:05:06.668 "method": "iobuf_set_options", 00:05:06.668 "params": { 00:05:06.668 "enable_numa": false, 00:05:06.668 "large_bufsize": 135168, 00:05:06.668 "large_pool_count": 1024, 00:05:06.668 "small_bufsize": 8192, 00:05:06.668 "small_pool_count": 8192 00:05:06.668 } 
00:05:06.668 } 00:05:06.668 ] 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "subsystem": "sock", 00:05:06.668 "config": [ 00:05:06.668 { 00:05:06.668 "method": "sock_set_default_impl", 00:05:06.668 "params": { 00:05:06.668 "impl_name": "posix" 00:05:06.668 } 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "method": "sock_impl_set_options", 00:05:06.668 "params": { 00:05:06.668 "enable_ktls": false, 00:05:06.668 "enable_placement_id": 0, 00:05:06.668 "enable_quickack": false, 00:05:06.668 "enable_recv_pipe": true, 00:05:06.668 "enable_zerocopy_send_client": false, 00:05:06.668 "enable_zerocopy_send_server": true, 00:05:06.668 "impl_name": "ssl", 00:05:06.668 "recv_buf_size": 4096, 00:05:06.668 "send_buf_size": 4096, 00:05:06.668 "tls_version": 0, 00:05:06.668 "zerocopy_threshold": 0 00:05:06.668 } 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "method": "sock_impl_set_options", 00:05:06.668 "params": { 00:05:06.668 "enable_ktls": false, 00:05:06.668 "enable_placement_id": 0, 00:05:06.668 "enable_quickack": false, 00:05:06.668 "enable_recv_pipe": true, 00:05:06.668 "enable_zerocopy_send_client": false, 00:05:06.668 "enable_zerocopy_send_server": true, 00:05:06.668 "impl_name": "posix", 00:05:06.668 "recv_buf_size": 2097152, 00:05:06.668 "send_buf_size": 2097152, 00:05:06.668 "tls_version": 0, 00:05:06.668 "zerocopy_threshold": 0 00:05:06.668 } 00:05:06.668 } 00:05:06.668 ] 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "subsystem": "vmd", 00:05:06.668 "config": [] 00:05:06.668 }, 00:05:06.668 { 00:05:06.668 "subsystem": "accel", 00:05:06.668 "config": [ 00:05:06.668 { 00:05:06.668 "method": "accel_set_options", 00:05:06.668 "params": { 00:05:06.668 "buf_count": 2048, 00:05:06.669 "large_cache_size": 16, 00:05:06.669 "sequence_count": 2048, 00:05:06.669 "small_cache_size": 128, 00:05:06.669 "task_count": 2048 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "bdev", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "bdev_set_options", 00:05:06.669 "params": { 00:05:06.669 "bdev_auto_examine": true, 00:05:06.669 "bdev_io_cache_size": 256, 00:05:06.669 "bdev_io_pool_size": 65535, 00:05:06.669 "iobuf_large_cache_size": 16, 00:05:06.669 "iobuf_small_cache_size": 128 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_raid_set_options", 00:05:06.669 "params": { 00:05:06.669 "process_max_bandwidth_mb_sec": 0, 00:05:06.669 "process_window_size_kb": 1024 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_iscsi_set_options", 00:05:06.669 "params": { 00:05:06.669 "timeout_sec": 30 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_nvme_set_options", 00:05:06.669 "params": { 00:05:06.669 "action_on_timeout": "none", 00:05:06.669 "allow_accel_sequence": false, 00:05:06.669 "arbitration_burst": 0, 00:05:06.669 "bdev_retry_count": 3, 00:05:06.669 "ctrlr_loss_timeout_sec": 0, 00:05:06.669 "delay_cmd_submit": true, 00:05:06.669 "dhchap_dhgroups": [ 00:05:06.669 "null", 00:05:06.669 "ffdhe2048", 00:05:06.669 "ffdhe3072", 00:05:06.669 "ffdhe4096", 00:05:06.669 "ffdhe6144", 00:05:06.669 "ffdhe8192" 00:05:06.669 ], 00:05:06.669 "dhchap_digests": [ 00:05:06.669 "sha256", 00:05:06.669 "sha384", 00:05:06.669 "sha512" 00:05:06.669 ], 00:05:06.669 "disable_auto_failback": false, 00:05:06.669 "fast_io_fail_timeout_sec": 0, 00:05:06.669 "generate_uuids": false, 00:05:06.669 "high_priority_weight": 0, 00:05:06.669 "io_path_stat": false, 00:05:06.669 "io_queue_requests": 0, 00:05:06.669 
"keep_alive_timeout_ms": 10000, 00:05:06.669 "low_priority_weight": 0, 00:05:06.669 "medium_priority_weight": 0, 00:05:06.669 "nvme_adminq_poll_period_us": 10000, 00:05:06.669 "nvme_error_stat": false, 00:05:06.669 "nvme_ioq_poll_period_us": 0, 00:05:06.669 "rdma_cm_event_timeout_ms": 0, 00:05:06.669 "rdma_max_cq_size": 0, 00:05:06.669 "rdma_srq_size": 0, 00:05:06.669 "rdma_umr_per_io": false, 00:05:06.669 "reconnect_delay_sec": 0, 00:05:06.669 "timeout_admin_us": 0, 00:05:06.669 "timeout_us": 0, 00:05:06.669 "transport_ack_timeout": 0, 00:05:06.669 "transport_retry_count": 4, 00:05:06.669 "transport_tos": 0 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_nvme_set_hotplug", 00:05:06.669 "params": { 00:05:06.669 "enable": false, 00:05:06.669 "period_us": 100000 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "bdev_wait_for_examine" 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "scsi", 00:05:06.669 "config": null 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "scheduler", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "framework_set_scheduler", 00:05:06.669 "params": { 00:05:06.669 "name": "static" 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "vhost_scsi", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "vhost_blk", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "ublk", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "nbd", 00:05:06.669 "config": [] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "nvmf", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "nvmf_set_config", 00:05:06.669 "params": { 00:05:06.669 "admin_cmd_passthru": { 00:05:06.669 "identify_ctrlr": false 00:05:06.669 }, 00:05:06.669 "dhchap_dhgroups": [ 00:05:06.669 "null", 00:05:06.669 "ffdhe2048", 00:05:06.669 "ffdhe3072", 00:05:06.669 "ffdhe4096", 00:05:06.669 "ffdhe6144", 00:05:06.669 "ffdhe8192" 00:05:06.669 ], 00:05:06.669 "dhchap_digests": [ 00:05:06.669 "sha256", 00:05:06.669 "sha384", 00:05:06.669 "sha512" 00:05:06.669 ], 00:05:06.669 "discovery_filter": "match_any" 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "nvmf_set_max_subsystems", 00:05:06.669 "params": { 00:05:06.669 "max_subsystems": 1024 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "nvmf_set_crdt", 00:05:06.669 "params": { 00:05:06.669 "crdt1": 0, 00:05:06.669 "crdt2": 0, 00:05:06.669 "crdt3": 0 00:05:06.669 } 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "method": "nvmf_create_transport", 00:05:06.669 "params": { 00:05:06.669 "abort_timeout_sec": 1, 00:05:06.669 "ack_timeout": 0, 00:05:06.669 "buf_cache_size": 4294967295, 00:05:06.669 "c2h_success": true, 00:05:06.669 "data_wr_pool_size": 0, 00:05:06.669 "dif_insert_or_strip": false, 00:05:06.669 "in_capsule_data_size": 4096, 00:05:06.669 "io_unit_size": 131072, 00:05:06.669 "max_aq_depth": 128, 00:05:06.669 "max_io_qpairs_per_ctrlr": 127, 00:05:06.669 "max_io_size": 131072, 00:05:06.669 "max_queue_depth": 128, 00:05:06.669 "num_shared_buffers": 511, 00:05:06.669 "sock_priority": 0, 00:05:06.669 "trtype": "TCP", 00:05:06.669 "zcopy": false 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 }, 00:05:06.669 { 00:05:06.669 "subsystem": "iscsi", 00:05:06.669 "config": [ 00:05:06.669 { 00:05:06.669 "method": "iscsi_set_options", 00:05:06.669 "params": { 00:05:06.669 
"allow_duplicated_isid": false, 00:05:06.669 "chap_group": 0, 00:05:06.669 "data_out_pool_size": 2048, 00:05:06.669 "default_time2retain": 20, 00:05:06.669 "default_time2wait": 2, 00:05:06.669 "disable_chap": false, 00:05:06.669 "error_recovery_level": 0, 00:05:06.669 "first_burst_length": 8192, 00:05:06.669 "immediate_data": true, 00:05:06.669 "immediate_data_pool_size": 16384, 00:05:06.669 "max_connections_per_session": 2, 00:05:06.669 "max_large_datain_per_connection": 64, 00:05:06.669 "max_queue_depth": 64, 00:05:06.669 "max_r2t_per_connection": 4, 00:05:06.669 "max_sessions": 128, 00:05:06.669 "mutual_chap": false, 00:05:06.669 "node_base": "iqn.2016-06.io.spdk", 00:05:06.669 "nop_in_interval": 30, 00:05:06.669 "nop_timeout": 60, 00:05:06.669 "pdu_pool_size": 36864, 00:05:06.669 "require_chap": false 00:05:06.669 } 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 } 00:05:06.669 ] 00:05:06.669 } 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60679 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 60679 ']' 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 60679 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60679 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.669 killing process with pid 60679 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60679' 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 60679 00:05:06.669 11:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 60679 00:05:06.983 11:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60705 00:05:06.983 11:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.983 11:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60705 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 60705 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60705 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 60705' 00:05:12.246 killing process with pid 60705 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 60705 00:05:12.246 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 60705 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.504 00:05:12.504 real 0m6.642s 00:05:12.504 user 0m6.202s 00:05:12.504 sys 0m0.651s 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.504 ************************************ 00:05:12.504 END TEST skip_rpc_with_json 00:05:12.504 ************************************ 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.504 11:25:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:12.504 11:25:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.504 11:25:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.504 11:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.504 ************************************ 00:05:12.504 START TEST skip_rpc_with_delay 00:05:12.504 ************************************ 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.504 [2024-12-16 11:25:17.890757] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
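The spdk_app_start ERROR just logged is the passing outcome of skip_rpc_with_delay: the NOT wrapper expects spdk_tgt to refuse --wait-for-rpc when --no-rpc-server removes the RPC server it would wait on. A minimal sketch of the same negative check, reusing the binary and flags traced above (the standalone invocation is illustrative, not part of the harness):

  # Expect failure: with no RPC server there is nothing to wait for, so
  # spdk_tgt must exit non-zero with the "Cannot use '--wait-for-rpc'" error.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  [ $? -ne 0 ] && echo 'negative test passed'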
00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.504 00:05:12.504 real 0m0.097s 00:05:12.504 user 0m0.066s 00:05:12.504 sys 0m0.029s 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.504 ************************************ 00:05:12.504 END TEST skip_rpc_with_delay 00:05:12.504 ************************************ 00:05:12.504 11:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 11:25:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.762 11:25:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.762 11:25:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.762 11:25:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.762 11:25:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.762 11:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 ************************************ 00:05:12.762 START TEST exit_on_failed_rpc_init 00:05:12.762 ************************************ 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60814 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60814 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 60814 ']' 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.762 11:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.762 [2024-12-16 11:25:18.035702] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:05:12.762 [2024-12-16 11:25:18.035825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:05:13.019 [2024-12-16 11:25:18.188134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.019 [2024-12-16 11:25:18.256085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:13.953 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.953 [2024-12-16 11:25:19.178551] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:13.953 [2024-12-16 11:25:19.178670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60844 ] 00:05:13.953 [2024-12-16 11:25:19.330392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.211 [2024-12-16 11:25:19.400927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.211 [2024-12-16 11:25:19.401083] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:14.211 [2024-12-16 11:25:19.401112] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.211 [2024-12-16 11:25:19.401128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60814 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 60814 ']' 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 60814 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60814 00:05:14.211 killing process with pid 60814 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60814' 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 60814 00:05:14.211 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 60814 00:05:14.777 00:05:14.777 real 0m1.929s 00:05:14.777 user 0m2.316s 00:05:14.777 sys 0m0.425s 00:05:14.777 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.777 11:25:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.777 ************************************ 00:05:14.777 END TEST exit_on_failed_rpc_init 00:05:14.777 ************************************ 00:05:14.777 11:25:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.777 ************************************ 00:05:14.777 END TEST skip_rpc 00:05:14.777 ************************************ 00:05:14.777 00:05:14.777 real 0m14.512s 00:05:14.777 user 0m13.816s 00:05:14.777 sys 0m1.617s 00:05:14.777 11:25:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.777 11:25:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.777 11:25:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.777 11:25:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.777 11:25:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.777 11:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:14.777 
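The failure sequence above (rpc.c:180 socket path in use, rpc.c:166 unable to start the RPC service, spdk_app_stop'd on non-zero) is exactly what exit_on_failed_rpc_init asserts: two targets cannot share the default RPC Unix socket /var/tmp/spdk.sock. A second instance needs its own socket via -r, the same flag the json_config suite below uses for /var/tmp/spdk_tgt.sock. A minimal sketch, where /var/tmp/spdk2.sock is an illustrative path, not one used by these tests:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                         # owns /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # avoids the collision logged above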
************************************ 00:05:14.777 START TEST rpc_client 00:05:14.777 ************************************ 00:05:14.777 11:25:19 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.777 * Looking for test storage... 00:05:14.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.777 11:25:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.777 --rc genhtml_branch_coverage=1 00:05:14.777 --rc genhtml_function_coverage=1 00:05:14.777 --rc genhtml_legend=1 00:05:14.777 --rc geninfo_all_blocks=1 00:05:14.777 --rc geninfo_unexecuted_blocks=1 00:05:14.777 00:05:14.777 ' 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.777 --rc genhtml_branch_coverage=1 00:05:14.777 --rc genhtml_function_coverage=1 00:05:14.777 --rc genhtml_legend=1 00:05:14.777 --rc geninfo_all_blocks=1 00:05:14.777 --rc geninfo_unexecuted_blocks=1 00:05:14.777 00:05:14.777 ' 00:05:14.777 11:25:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.777 --rc genhtml_branch_coverage=1 00:05:14.778 --rc genhtml_function_coverage=1 00:05:14.778 --rc genhtml_legend=1 00:05:14.778 --rc geninfo_all_blocks=1 00:05:14.778 --rc geninfo_unexecuted_blocks=1 00:05:14.778 00:05:14.778 ' 00:05:14.778 11:25:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.778 --rc genhtml_branch_coverage=1 00:05:14.778 --rc genhtml_function_coverage=1 00:05:14.778 --rc genhtml_legend=1 00:05:14.778 --rc geninfo_all_blocks=1 00:05:14.778 --rc geninfo_unexecuted_blocks=1 00:05:14.778 00:05:14.778 ' 00:05:14.778 11:25:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:15.035 OK 00:05:15.035 11:25:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.035 00:05:15.035 real 0m0.215s 00:05:15.035 user 0m0.142s 00:05:15.035 sys 0m0.081s 00:05:15.035 11:25:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.035 11:25:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 ************************************ 00:05:15.035 END TEST rpc_client 00:05:15.035 ************************************ 00:05:15.035 11:25:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.035 11:25:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.035 11:25:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.036 11:25:20 -- common/autotest_common.sh@10 -- # set +x 00:05:15.036 ************************************ 00:05:15.036 START TEST json_config 00:05:15.036 ************************************ 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.036 11:25:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.036 11:25:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.036 11:25:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.036 11:25:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.036 11:25:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.036 11:25:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.036 11:25:20 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.036 11:25:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.036 11:25:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.036 11:25:20 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.036 11:25:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.036 11:25:20 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.036 11:25:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.036 11:25:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.036 11:25:20 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.036 --rc genhtml_branch_coverage=1 00:05:15.036 --rc genhtml_function_coverage=1 00:05:15.036 --rc genhtml_legend=1 00:05:15.036 --rc geninfo_all_blocks=1 00:05:15.036 --rc geninfo_unexecuted_blocks=1 00:05:15.036 00:05:15.036 ' 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.036 --rc genhtml_branch_coverage=1 00:05:15.036 --rc genhtml_function_coverage=1 00:05:15.036 --rc genhtml_legend=1 00:05:15.036 --rc geninfo_all_blocks=1 00:05:15.036 --rc geninfo_unexecuted_blocks=1 00:05:15.036 00:05:15.036 ' 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.036 --rc genhtml_branch_coverage=1 00:05:15.036 --rc genhtml_function_coverage=1 00:05:15.036 --rc genhtml_legend=1 00:05:15.036 --rc geninfo_all_blocks=1 00:05:15.036 --rc geninfo_unexecuted_blocks=1 00:05:15.036 00:05:15.036 ' 00:05:15.036 11:25:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.036 --rc genhtml_branch_coverage=1 00:05:15.036 --rc genhtml_function_coverage=1 00:05:15.036 --rc genhtml_legend=1 00:05:15.036 --rc geninfo_all_blocks=1 00:05:15.036 --rc geninfo_unexecuted_blocks=1 00:05:15.036 00:05:15.036 ' 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.036 11:25:20 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:15.036 11:25:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.036 11:25:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.036 11:25:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.036 11:25:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.036 11:25:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.036 11:25:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.036 11:25:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.036 11:25:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.036 11:25:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@51 -- # : 0 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.036 11:25:20 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.036 11:25:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.036 11:25:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:15.037 INFO: JSON configuration test init 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:15.037 11:25:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:15.037 11:25:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.037 11:25:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.294 11:25:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.294 11:25:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.294 11:25:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.294 11:25:20 json_config -- json_config/common.sh@10 -- # shift 
00:05:15.294 11:25:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.294 11:25:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.294 11:25:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.294 11:25:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.294 11:25:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.294 11:25:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60984 00:05:15.294 11:25:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.294 Waiting for target to run... 00:05:15.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.294 11:25:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.294 11:25:20 json_config -- json_config/common.sh@25 -- # waitforlisten 60984 /var/tmp/spdk_tgt.sock 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 60984 ']' 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.294 11:25:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.294 [2024-12-16 11:25:20.529037] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:15.294 [2024-12-16 11:25:20.529437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60984 ] 00:05:15.859 [2024-12-16 11:25:20.990954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.859 [2024-12-16 11:25:21.048648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.425 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.425 11:25:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.425 11:25:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.425 11:25:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.425 11:25:21 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:16.991 11:25:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.991 11:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:16.991 11:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.991 11:25:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.250 11:25:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.250 11:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.250 11:25:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.250 11:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:17.250 11:25:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.250 11:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.508 MallocForNvmf0 00:05:17.508 11:25:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.508 11:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.764 MallocForNvmf1 00:05:17.764 11:25:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.764 11:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.329 [2024-12-16 11:25:23.474439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.329 11:25:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.329 11:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.586 11:25:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.586 11:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.844 11:25:24 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.844 11:25:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.102 11:25:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.102 11:25:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.360 [2024-12-16 11:25:24.587126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.360 11:25:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:19.360 11:25:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.360 11:25:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.360 11:25:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:19.360 11:25:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.360 11:25:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.360 11:25:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:19.360 11:25:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:19.360 11:25:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.618 MallocBdevForConfigChangeCheck 00:05:19.618 11:25:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:19.618 11:25:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.618 11:25:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.875 11:25:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:19.875 11:25:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.133 INFO: shutting down applications... 00:05:20.133 11:25:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:20.133 11:25:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:20.133 11:25:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:20.133 11:25:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:20.133 11:25:25 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:20.699 Calling clear_iscsi_subsystem 00:05:20.699 Calling clear_nvmf_subsystem 00:05:20.699 Calling clear_nbd_subsystem 00:05:20.699 Calling clear_ublk_subsystem 00:05:20.699 Calling clear_vhost_blk_subsystem 00:05:20.699 Calling clear_vhost_scsi_subsystem 00:05:20.699 Calling clear_bdev_subsystem 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:20.699 11:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.264 11:25:26 json_config -- json_config/json_config.sh@352 -- # break 00:05:21.264 11:25:26 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:21.264 11:25:26 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:21.264 11:25:26 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.264 11:25:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.264 11:25:26 json_config -- json_config/common.sh@35 -- # [[ -n 60984 ]] 00:05:21.264 11:25:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60984 00:05:21.264 11:25:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.264 11:25:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.265 11:25:26 json_config -- json_config/common.sh@41 -- # kill -0 60984 00:05:21.265 11:25:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.523 11:25:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.523 11:25:26 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.523 11:25:26 json_config -- json_config/common.sh@41 -- # kill -0 60984 00:05:21.523 11:25:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.523 11:25:26 json_config -- json_config/common.sh@43 -- # break 00:05:21.523 11:25:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.523 SPDK target shutdown done 00:05:21.523 11:25:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.523 INFO: relaunching applications... 00:05:21.523 11:25:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:21.523 11:25:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.523 11:25:26 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.523 11:25:26 json_config -- json_config/common.sh@10 -- # shift 00:05:21.523 11:25:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.523 11:25:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.523 11:25:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.523 11:25:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.523 11:25:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.523 11:25:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61269 00:05:21.523 11:25:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.523 Waiting for target to run... 00:05:21.523 11:25:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.523 11:25:26 json_config -- json_config/common.sh@25 -- # waitforlisten 61269 /var/tmp/spdk_tgt.sock 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 61269 ']' 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.523 11:25:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.781 [2024-12-16 11:25:26.966323] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:05:21.782 [2024-12-16 11:25:26.966462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61269 ] 00:05:22.040 [2024-12-16 11:25:27.418835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.298 [2024-12-16 11:25:27.468047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.556 [2024-12-16 11:25:27.817957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.556 [2024-12-16 11:25:27.850113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.814 11:25:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.814 11:25:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:22.814 00:05:22.814 11:25:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.814 11:25:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:22.814 INFO: Checking if target configuration is the same... 00:05:22.814 11:25:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.814 11:25:28 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.814 11:25:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:22.814 11:25:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.814 + '[' 2 -ne 2 ']' 00:05:22.814 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.814 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:22.814 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.814 +++ basename /dev/fd/62 00:05:22.814 ++ mktemp /tmp/62.XXX 00:05:22.814 + tmp_file_1=/tmp/62.DA3 00:05:22.814 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.814 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.814 + tmp_file_2=/tmp/spdk_tgt_config.json.CkZ 00:05:22.814 + ret=0 00:05:22.814 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.074 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.332 + diff -u /tmp/62.DA3 /tmp/spdk_tgt_config.json.CkZ 00:05:23.332 INFO: JSON config files are the same 00:05:23.332 + echo 'INFO: JSON config files are the same' 00:05:23.332 + rm /tmp/62.DA3 /tmp/spdk_tgt_config.json.CkZ 00:05:23.332 + exit 0 00:05:23.332 11:25:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:23.332 INFO: changing configuration and checking if this can be detected... 00:05:23.332 11:25:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
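The exit 0 above is produced by json_diff.sh, which normalizes both sides with config_filter.py -method sort before diffing, so key ordering cannot cause a false mismatch. The same comparison done by hand, as a sketch: it assumes config_filter.py filters stdin to stdout (which its use inside json_diff.sh suggests), and live.sorted / ref.sorted are illustrative names:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > live.sorted
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > ref.sorted
  diff -u ref.sorted live.sorted   # exit 0: relaunched target matches the saved file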
00:05:23.332 11:25:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.332 11:25:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.590 11:25:28 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.590 11:25:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:23.590 11:25:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.590 + '[' 2 -ne 2 ']' 00:05:23.590 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.590 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:23.590 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.590 +++ basename /dev/fd/62 00:05:23.590 ++ mktemp /tmp/62.XXX 00:05:23.590 + tmp_file_1=/tmp/62.WRQ 00:05:23.590 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.590 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.590 + tmp_file_2=/tmp/spdk_tgt_config.json.hxv 00:05:23.590 + ret=0 00:05:23.590 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.157 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.157 + diff -u /tmp/62.WRQ /tmp/spdk_tgt_config.json.hxv 00:05:24.157 + ret=1 00:05:24.157 + echo '=== Start of file: /tmp/62.WRQ ===' 00:05:24.157 + cat /tmp/62.WRQ 00:05:24.157 + echo '=== End of file: /tmp/62.WRQ ===' 00:05:24.157 + echo '' 00:05:24.157 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hxv ===' 00:05:24.157 + cat /tmp/spdk_tgt_config.json.hxv 00:05:24.157 + echo '=== End of file: /tmp/spdk_tgt_config.json.hxv ===' 00:05:24.157 + echo '' 00:05:24.157 + rm /tmp/62.WRQ /tmp/spdk_tgt_config.json.hxv 00:05:24.157 + exit 1 00:05:24.157 INFO: configuration change detected. 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
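The change-detection cycle just traced (delete MallocBdevForConfigChangeCheck over RPC, re-run json_diff.sh, expect ret=1) reduces to: dump the live config over RPC, normalize both documents, and diff. A rough equivalent, assuming config_filter.py filters stdin to stdout as its use in the pipeline suggests:

  # Rough equivalent of the json_diff.sh flow traced above (paths from the log).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

  live=$(mktemp /tmp/62.XXX)
  base=$(mktemp /tmp/spdk_tgt_config.json.XXX)

  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
  "$filter" -method sort < "$saved" > "$base"

  if diff -u "$live" "$base"; then
    echo 'INFO: JSON config files are the same'
  else
    echo 'INFO: configuration change detected.'   # the ret=1 path seen above
  fi
  rm -f "$live" "$base"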
00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 61269 ]] 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.157 11:25:29 json_config -- json_config/json_config.sh@330 -- # killprocess 61269 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@954 -- # '[' -z 61269 ']' 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@958 -- # kill -0 61269 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61269 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.157 killing process with pid 61269 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61269' 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@973 -- # kill 61269 00:05:24.157 11:25:29 json_config -- common/autotest_common.sh@978 -- # wait 61269 00:05:24.415 11:25:29 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.415 11:25:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:24.415 11:25:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.415 11:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.415 11:25:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:24.415 INFO: Success 00:05:24.415 11:25:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:24.415 00:05:24.415 real 0m9.488s 00:05:24.415 user 0m13.852s 00:05:24.415 sys 0m2.044s 00:05:24.415 
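killprocess, which tore the relaunched target down above, verifies the pid is alive and checks what it is before signalling; a condensed sketch of the sequence from autotest_common.sh@954-978, with the uname branch and the sudo path trimmed:

  # Condensed sketch of killprocess as traced above (error paths trimmed).
  killprocess() {
    local pid=$1 name
    kill -0 "$pid" || return 1                 # must still be running
    name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for spdk_tgt here
    [ "$name" = sudo ] && return 1             # the sudo case takes another path
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; fails if not our child
  }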
11:25:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.415 11:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.415 ************************************ 00:05:24.415 END TEST json_config 00:05:24.415 ************************************ 00:05:24.415 11:25:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.415 11:25:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.415 11:25:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.415 11:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.415 ************************************ 00:05:24.415 START TEST json_config_extra_key 00:05:24.415 ************************************ 00:05:24.415 11:25:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.674 --rc genhtml_branch_coverage=1 00:05:24.674 --rc genhtml_function_coverage=1 00:05:24.674 --rc genhtml_legend=1 00:05:24.674 --rc geninfo_all_blocks=1 00:05:24.674 --rc geninfo_unexecuted_blocks=1 00:05:24.674 00:05:24.674 ' 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.674 --rc genhtml_branch_coverage=1 00:05:24.674 --rc genhtml_function_coverage=1 00:05:24.674 --rc genhtml_legend=1 00:05:24.674 --rc geninfo_all_blocks=1 00:05:24.674 --rc geninfo_unexecuted_blocks=1 00:05:24.674 00:05:24.674 ' 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.674 --rc genhtml_branch_coverage=1 00:05:24.674 --rc genhtml_function_coverage=1 00:05:24.674 --rc genhtml_legend=1 00:05:24.674 --rc geninfo_all_blocks=1 00:05:24.674 --rc geninfo_unexecuted_blocks=1 00:05:24.674 00:05:24.674 ' 00:05:24.674 11:25:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.674 --rc genhtml_branch_coverage=1 00:05:24.674 --rc genhtml_function_coverage=1 00:05:24.674 --rc genhtml_legend=1 00:05:24.674 --rc geninfo_all_blocks=1 00:05:24.674 --rc geninfo_unexecuted_blocks=1 00:05:24.674 00:05:24.674 ' 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.674 11:25:29 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.674 11:25:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.674 11:25:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.674 11:25:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.674 11:25:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.674 11:25:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.674 11:25:29 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.674 11:25:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.674 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.675 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:24.675 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.675 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.675 INFO: launching applications... 00:05:24.675 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
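Note how json_config_extra_key's common.sh keeps all per-app state in bash associative arrays keyed by app name, as the declarations above show; restated with the values from the trace:

  # Per-app bookkeeping declared in common.sh above (values from the trace).
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json)
  trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR   # verbatim from the trace

Every later step (launch, waitforlisten, shutdown) then indexes these arrays by "$app".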
00:05:24.675 11:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61453 00:05:24.675 Waiting for target to run... 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61453 /var/tmp/spdk_tgt.sock 00:05:24.675 11:25:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 61453 ']' 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.675 11:25:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.675 [2024-12-16 11:25:30.059055] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:24.675 [2024-12-16 11:25:30.059199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61453 ] 00:05:25.240 [2024-12-16 11:25:30.517780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.240 [2024-12-16 11:25:30.574949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.807 11:25:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.807 11:25:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.807 00:05:25.807 INFO: shutting down applications... 00:05:25.807 11:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
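The launch step above backgrounds spdk_tgt with the extra_key.json config and blocks in waitforlisten until the RPC socket answers; a hedged sketch of the shape of that step (the real waitforlisten in autotest_common.sh is more thorough than this socket poll):

  # Hedged sketch of launch-and-wait; the real waitforlisten does more than poll.
  sock=/var/tmp/spdk_tgt.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  pid=$!

  echo 'Waiting for target to run...'
  for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
    [ -S "$sock" ] && break         # UNIX socket appears once the target listens
    sleep 0.1                       # interval is illustrative
  done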
00:05:25.807 11:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61453 ]] 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61453 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61453 00:05:25.807 11:25:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61453 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.371 SPDK target shutdown done 00:05:26.371 11:25:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.371 Success 00:05:26.371 11:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.371 00:05:26.371 real 0m1.844s 00:05:26.371 user 0m1.821s 00:05:26.371 sys 0m0.486s 00:05:26.371 11:25:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.371 11:25:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.371 ************************************ 00:05:26.371 END TEST json_config_extra_key 00:05:26.371 ************************************ 00:05:26.372 11:25:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.372 11:25:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.372 11:25:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.372 11:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.372 ************************************ 00:05:26.372 START TEST alias_rpc 00:05:26.372 ************************************ 00:05:26.372 11:25:31 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.372 * Looking for test storage... 
00:05:26.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:26.372 11:25:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.372 11:25:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.372 11:25:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.629 11:25:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.629 11:25:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.629 11:25:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.629 11:25:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.629 --rc genhtml_branch_coverage=1 00:05:26.629 --rc genhtml_function_coverage=1 00:05:26.629 --rc genhtml_legend=1 00:05:26.629 --rc geninfo_all_blocks=1 00:05:26.629 --rc geninfo_unexecuted_blocks=1 00:05:26.629 00:05:26.629 ' 00:05:26.629 11:25:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.629 --rc genhtml_branch_coverage=1 00:05:26.629 --rc genhtml_function_coverage=1 00:05:26.629 --rc genhtml_legend=1 00:05:26.630 --rc geninfo_all_blocks=1 00:05:26.630 --rc geninfo_unexecuted_blocks=1 00:05:26.630 00:05:26.630 ' 00:05:26.630 11:25:31 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.630 --rc genhtml_branch_coverage=1 00:05:26.630 --rc genhtml_function_coverage=1 00:05:26.630 --rc genhtml_legend=1 00:05:26.630 --rc geninfo_all_blocks=1 00:05:26.630 --rc geninfo_unexecuted_blocks=1 00:05:26.630 00:05:26.630 ' 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.630 --rc genhtml_branch_coverage=1 00:05:26.630 --rc genhtml_function_coverage=1 00:05:26.630 --rc genhtml_legend=1 00:05:26.630 --rc geninfo_all_blocks=1 00:05:26.630 --rc geninfo_unexecuted_blocks=1 00:05:26.630 00:05:26.630 ' 00:05:26.630 11:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.630 11:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61543 00:05:26.630 11:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.630 11:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61543 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 61543 ']' 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.630 11:25:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.630 [2024-12-16 11:25:31.911113] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
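The lcov version gate that just ran again (lt 1.15 2 through scripts/common.sh cmp_versions) splits each version on '.', '-' and ':' and compares component-wise; a compact, illustrative restatement of that logic (the equal-versions fall-through here is an assumption):

  # Illustrative restatement of the cmp_versions compare traced above.
  lt() {                                  # usage: lt 1.15 2 -> true if $1 < $2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                              # all components equal: not less-than
  }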
00:05:26.630 [2024-12-16 11:25:31.911226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ] 00:05:26.888 [2024-12-16 11:25:32.061510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.888 [2024-12-16 11:25:32.128318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.144 11:25:32 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.144 11:25:32 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.144 11:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:27.401 11:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61543 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 61543 ']' 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 61543 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61543 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61543' 00:05:27.401 killing process with pid 61543 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 61543 00:05:27.401 11:25:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 61543 00:05:27.966 00:05:27.966 real 0m1.527s 00:05:27.966 user 0m1.632s 00:05:27.966 sys 0m0.454s 00:05:27.966 11:25:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.966 11:25:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.966 ************************************ 00:05:27.966 END TEST alias_rpc 00:05:27.966 ************************************ 00:05:27.966 11:25:33 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:27.966 11:25:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.966 11:25:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.966 11:25:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.966 11:25:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.966 ************************************ 00:05:27.966 START TEST dpdk_mem_utility 00:05:27.966 ************************************ 00:05:27.966 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.966 * Looking for test storage... 
00:05:27.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:27.966 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.966 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.966 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.224 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.224 11:25:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.225 11:25:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.225 --rc genhtml_branch_coverage=1 00:05:28.225 --rc genhtml_function_coverage=1 00:05:28.225 --rc genhtml_legend=1 00:05:28.225 --rc geninfo_all_blocks=1 00:05:28.225 --rc geninfo_unexecuted_blocks=1 00:05:28.225 00:05:28.225 ' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.225 --rc 
genhtml_branch_coverage=1 00:05:28.225 --rc genhtml_function_coverage=1 00:05:28.225 --rc genhtml_legend=1 00:05:28.225 --rc geninfo_all_blocks=1 00:05:28.225 --rc geninfo_unexecuted_blocks=1 00:05:28.225 00:05:28.225 ' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.225 --rc genhtml_branch_coverage=1 00:05:28.225 --rc genhtml_function_coverage=1 00:05:28.225 --rc genhtml_legend=1 00:05:28.225 --rc geninfo_all_blocks=1 00:05:28.225 --rc geninfo_unexecuted_blocks=1 00:05:28.225 00:05:28.225 ' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.225 --rc genhtml_branch_coverage=1 00:05:28.225 --rc genhtml_function_coverage=1 00:05:28.225 --rc genhtml_legend=1 00:05:28.225 --rc geninfo_all_blocks=1 00:05:28.225 --rc geninfo_unexecuted_blocks=1 00:05:28.225 00:05:28.225 ' 00:05:28.225 11:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.225 11:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61629 00:05:28.225 11:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.225 11:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61629 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 61629 ']' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.225 11:25:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.225 [2024-12-16 11:25:33.528262] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
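test_dpdk_mem_info.sh, launched above, drives two steps once the target is up: the env_dpdk_get_mem_stats RPC, which makes the target write /tmp/spdk_mem_dump.txt, and the dpdk_mem_info.py parser over that dump. In outline, assuming rpc_cmd wraps scripts/rpc.py as elsewhere in these tests:

  # Outline of the two steps that produce the dump below (paths from the trace).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  "$rpc" env_dpdk_get_mem_stats   # reply: { "filename": "/tmp/spdk_mem_dump.txt" }
  "$MEM_SCRIPT"                   # summarize heaps, mempools, memzones
  "$MEM_SCRIPT" -m 0              # per-element detail for heap id 0 (shown below)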
00:05:28.225 [2024-12-16 11:25:33.528367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:05:28.482 [2024-12-16 11:25:33.678655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.482 [2024-12-16 11:25:33.748344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.740 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.740 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:28.740 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.740 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.740 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.740 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.740 { 00:05:28.740 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.740 } 00:05:28.740 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.740 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.740 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:28.740 1 heaps totaling size 818.000000 MiB 00:05:28.740 size: 818.000000 MiB heap id: 0 00:05:28.740 end heaps---------- 00:05:28.740 9 mempools totaling size 603.782043 MiB 00:05:28.740 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.740 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.740 size: 100.555481 MiB name: bdev_io_61629 00:05:28.740 size: 50.003479 MiB name: msgpool_61629 00:05:28.740 size: 36.509338 MiB name: fsdev_io_61629 00:05:28.740 size: 21.763794 MiB name: PDU_Pool 00:05:28.740 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.740 size: 4.133484 MiB name: evtpool_61629 00:05:28.740 size: 0.026123 MiB name: Session_Pool 00:05:28.740 end mempools------- 00:05:28.740 6 memzones totaling size 4.142822 MiB 00:05:28.740 size: 1.000366 MiB name: RG_ring_0_61629 00:05:28.740 size: 1.000366 MiB name: RG_ring_1_61629 00:05:28.740 size: 1.000366 MiB name: RG_ring_4_61629 00:05:28.740 size: 1.000366 MiB name: RG_ring_5_61629 00:05:28.740 size: 0.125366 MiB name: RG_ring_2_61629 00:05:28.740 size: 0.015991 MiB name: RG_ring_3_61629 00:05:28.740 end memzones------- 00:05:28.740 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.999 heap id: 0 total size: 818.000000 MiB number of busy elements: 230 number of free elements: 15 00:05:28.999 list of free elements. 
size: 10.818420 MiB 00:05:28.999 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:28.999 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:28.999 element at address: 0x200000400000 with size: 0.996338 MiB 00:05:28.999 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:28.999 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:28.999 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:28.999 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:28.999 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:28.999 element at address: 0x20001ae00000 with size: 0.571533 MiB 00:05:28.999 element at address: 0x200000c00000 with size: 0.490662 MiB 00:05:28.999 element at address: 0x20000a600000 with size: 0.489441 MiB 00:05:28.999 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:28.999 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:28.999 element at address: 0x200028200000 with size: 0.398865 MiB 00:05:28.999 element at address: 0x200000800000 with size: 0.353394 MiB 00:05:28.999 list of standard malloc elements. size: 199.252686 MiB 00:05:28.999 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:28.999 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:28.999 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.999 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:28.999 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:28.999 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.999 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:28.999 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.999 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:28.999 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000085a780 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000085a980 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:28.999 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:28.999 element at 
address: 0x200003e7b600 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:28.999 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:28.999 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:28.999 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93c40 
with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:29.000 element at address: 0x2000282661c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x200028266280 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ce80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826da40 with size: 0.000183 MiB 
00:05:29.000 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:29.000 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:29.000 list of 
memzone associated elements. size: 607.928894 MiB 00:05:29.000 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:29.000 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.000 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:29.000 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.000 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:29.000 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_61629_0 00:05:29.000 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:29.000 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61629_0 00:05:29.000 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:29.000 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_61629_0 00:05:29.000 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:29.000 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.000 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:29.000 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.000 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:29.000 associated memzone info: size: 3.000122 MiB name: MP_evtpool_61629_0 00:05:29.000 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:29.000 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61629 00:05:29.000 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:29.000 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61629 00:05:29.000 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:29.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.000 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:29.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.000 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:29.000 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.000 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:29.000 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.001 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:29.001 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61629 00:05:29.001 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:29.001 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61629 00:05:29.001 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:29.001 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61629 00:05:29.001 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:29.001 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61629 00:05:29.001 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:29.001 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_61629 00:05:29.001 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:29.001 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61629 00:05:29.001 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:29.001 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.001 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:29.001 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.001 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:29.001 associated 
memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.001 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:29.001 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_61629 00:05:29.001 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:05:29.001 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61629 00:05:29.001 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:29.001 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.001 element at address: 0x200028266340 with size: 0.023743 MiB 00:05:29.001 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.001 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:05:29.001 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61629 00:05:29.001 element at address: 0x20002826c480 with size: 0.002441 MiB 00:05:29.001 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.001 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:29.001 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61629 00:05:29.001 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:29.001 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_61629 00:05:29.001 element at address: 0x20000085a840 with size: 0.000305 MiB 00:05:29.001 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61629 00:05:29.001 element at address: 0x20002826cf40 with size: 0.000305 MiB 00:05:29.001 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.001 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.001 11:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61629 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 61629 ']' 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 61629 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61629 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.001 killing process with pid 61629 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61629' 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 61629 00:05:29.001 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 61629 00:05:29.258 00:05:29.258 real 0m1.376s 00:05:29.258 user 0m1.331s 00:05:29.258 sys 0m0.448s 00:05:29.258 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.258 ************************************ 00:05:29.258 END TEST dpdk_mem_utility 00:05:29.258 ************************************ 00:05:29.258 11:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.516 11:25:34 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.516 11:25:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.516 11:25:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.516 11:25:34 -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.516 ************************************ 00:05:29.516 START TEST event 00:05:29.516 ************************************ 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.516 * Looking for test storage... 00:05:29.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.516 11:25:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.516 11:25:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.516 11:25:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.516 11:25:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.516 11:25:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.516 11:25:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.516 11:25:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.516 11:25:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.516 11:25:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.516 11:25:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.516 11:25:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.516 11:25:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:29.516 11:25:34 event -- scripts/common.sh@345 -- # : 1 00:05:29.516 11:25:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.516 11:25:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.516 11:25:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:29.516 11:25:34 event -- scripts/common.sh@353 -- # local d=1 00:05:29.516 11:25:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.516 11:25:34 event -- scripts/common.sh@355 -- # echo 1 00:05:29.516 11:25:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.516 11:25:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:29.516 11:25:34 event -- scripts/common.sh@353 -- # local d=2 00:05:29.516 11:25:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.516 11:25:34 event -- scripts/common.sh@355 -- # echo 2 00:05:29.516 11:25:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.516 11:25:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.516 11:25:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.516 11:25:34 event -- scripts/common.sh@368 -- # return 0 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.516 --rc genhtml_branch_coverage=1 00:05:29.516 --rc genhtml_function_coverage=1 00:05:29.516 --rc genhtml_legend=1 00:05:29.516 --rc geninfo_all_blocks=1 00:05:29.516 --rc geninfo_unexecuted_blocks=1 00:05:29.516 00:05:29.516 ' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.516 --rc genhtml_branch_coverage=1 00:05:29.516 --rc genhtml_function_coverage=1 00:05:29.516 --rc genhtml_legend=1 00:05:29.516 --rc geninfo_all_blocks=1 00:05:29.516 --rc geninfo_unexecuted_blocks=1 00:05:29.516 00:05:29.516 ' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.516 --rc genhtml_branch_coverage=1 00:05:29.516 --rc genhtml_function_coverage=1 00:05:29.516 --rc genhtml_legend=1 00:05:29.516 --rc geninfo_all_blocks=1 00:05:29.516 --rc geninfo_unexecuted_blocks=1 00:05:29.516 00:05:29.516 ' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.516 --rc genhtml_branch_coverage=1 00:05:29.516 --rc genhtml_function_coverage=1 00:05:29.516 --rc genhtml_legend=1 00:05:29.516 --rc geninfo_all_blocks=1 00:05:29.516 --rc geninfo_unexecuted_blocks=1 00:05:29.516 00:05:29.516 ' 00:05:29.516 11:25:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.516 11:25:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.516 11:25:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:29.516 11:25:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.516 11:25:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.516 ************************************ 00:05:29.516 START TEST event_perf 00:05:29.516 ************************************ 00:05:29.516 11:25:34 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.516 Running I/O for 1 seconds...[2024-12-16 
11:25:34.896700] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:29.516 [2024-12-16 11:25:34.897324] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61719 ] 00:05:29.774 [2024-12-16 11:25:35.046373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.774 [2024-12-16 11:25:35.114042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.774 Running I/O for 1 seconds...[2024-12-16 11:25:35.114223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.774 [2024-12-16 11:25:35.114361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.774 [2024-12-16 11:25:35.114526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.147 00:05:31.147 lcore 0: 200849 00:05:31.147 lcore 1: 200848 00:05:31.147 lcore 2: 200850 00:05:31.147 lcore 3: 200851 00:05:31.147 done. 00:05:31.147 00:05:31.147 real 0m1.289s 00:05:31.147 user 0m4.116s 00:05:31.147 sys 0m0.053s 00:05:31.147 11:25:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.147 11:25:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.147 ************************************ 00:05:31.147 END TEST event_perf 00:05:31.147 ************************************ 00:05:31.147 11:25:36 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.147 11:25:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.147 11:25:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.147 11:25:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.147 ************************************ 00:05:31.147 START TEST event_reactor 00:05:31.147 ************************************ 00:05:31.147 11:25:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.147 [2024-12-16 11:25:36.235376] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
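The event_perf binary above was launched with -m 0xF, which the EAL parameter line below echoes as -c 0xF; the four reactors it starts then report as lcores 0 through 3. A quick sketch for decoding such a hex coremask into lcore numbers (the helper name is ours; this is plain bash, not an SPDK utility):

    # Decode a DPDK/SPDK hex coremask (the -m / -c flag) into lcore numbers.
    mask_to_lcores() {
        local mask=$(( $1 ))   # bash arithmetic accepts 0x-prefixed hex
        local core=0 out=""
        while (( mask > 0 )); do
            if (( mask & 1 )); then
                out+="$core "
            fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${out% }"
    }

    mask_to_lcores 0xF    # prints "0 1 2 3", the four reactors reported above
    mask_to_lcores 0x3    # prints "0 1"

The same arithmetic explains the later runs: app_repeat uses -m 0x3, matching its two reactors on cores 0 and 1.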
00:05:31.147 [2024-12-16 11:25:36.235587] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61752 ] 00:05:31.147 [2024-12-16 11:25:36.381218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.147 [2024-12-16 11:25:36.444819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.080 test_start 00:05:32.080 oneshot 00:05:32.080 tick 100 00:05:32.080 tick 100 00:05:32.080 tick 250 00:05:32.080 tick 100 00:05:32.080 tick 100 00:05:32.080 tick 100 00:05:32.080 tick 250 00:05:32.080 tick 500 00:05:32.080 tick 100 00:05:32.080 tick 100 00:05:32.080 tick 250 00:05:32.080 tick 100 00:05:32.080 tick 100 00:05:32.080 test_end 00:05:32.338 00:05:32.338 real 0m1.280s 00:05:32.338 user 0m1.130s 00:05:32.338 sys 0m0.042s 00:05:32.338 11:25:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.338 11:25:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.338 ************************************ 00:05:32.338 END TEST event_reactor 00:05:32.338 ************************************ 00:05:32.338 11:25:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.338 11:25:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.338 11:25:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.338 11:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.338 ************************************ 00:05:32.338 START TEST event_reactor_perf 00:05:32.338 ************************************ 00:05:32.338 11:25:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.338 [2024-12-16 11:25:37.562185] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
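Each test above and below is framed identically: run_test (from common/autotest_common.sh, per the trace prefixes) prints the starred START TEST banner, runs and times the body (hence the real/user/sys triples), prints END TEST, and propagates the exit status. A condensed sketch of that wrapper, with the banner width simplified and the time keyword inferred from the output format:

    # Banner-and-timer wrapper in the spirit of run_test.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # the time keyword preserves the command's status
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    # e.g. run_test_sketch event_reactor_perf ./reactor_perf -t 1   (path illustrative)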
00:05:32.338 [2024-12-16 11:25:37.562294] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:05:32.338 [2024-12-16 11:25:37.706147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.596 [2024-12-16 11:25:37.769447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.530 test_start 00:05:33.530 test_end 00:05:33.530 Performance: 369944 events per second 00:05:33.530 00:05:33.530 real 0m1.278s 00:05:33.530 user 0m1.125s 00:05:33.530 sys 0m0.046s 00:05:33.530 11:25:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.530 11:25:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.530 ************************************ 00:05:33.530 END TEST event_reactor_perf 00:05:33.530 ************************************ 00:05:33.530 11:25:38 event -- event/event.sh@49 -- # uname -s 00:05:33.530 11:25:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.530 11:25:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.530 11:25:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.530 11:25:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.530 11:25:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.530 ************************************ 00:05:33.530 START TEST event_scheduler 00:05:33.530 ************************************ 00:05:33.530 11:25:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.789 * Looking for test storage... 
00:05:33.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:33.789 11:25:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.789 11:25:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.789 11:25:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.789 11:25:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.789 --rc genhtml_branch_coverage=1 00:05:33.789 --rc genhtml_function_coverage=1 00:05:33.789 --rc genhtml_legend=1 00:05:33.789 --rc geninfo_all_blocks=1 00:05:33.789 --rc geninfo_unexecuted_blocks=1 00:05:33.789 00:05:33.789 ' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.789 --rc genhtml_branch_coverage=1 00:05:33.789 --rc genhtml_function_coverage=1 00:05:33.789 --rc genhtml_legend=1 00:05:33.789 --rc geninfo_all_blocks=1 00:05:33.789 --rc geninfo_unexecuted_blocks=1 00:05:33.789 00:05:33.789 ' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.789 --rc genhtml_branch_coverage=1 00:05:33.789 --rc genhtml_function_coverage=1 00:05:33.789 --rc genhtml_legend=1 00:05:33.789 --rc geninfo_all_blocks=1 00:05:33.789 --rc geninfo_unexecuted_blocks=1 00:05:33.789 00:05:33.789 ' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.789 --rc genhtml_branch_coverage=1 00:05:33.789 --rc genhtml_function_coverage=1 00:05:33.789 --rc genhtml_legend=1 00:05:33.789 --rc geninfo_all_blocks=1 00:05:33.789 --rc geninfo_unexecuted_blocks=1 00:05:33.789 00:05:33.789 ' 00:05:33.789 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.789 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61857 00:05:33.789 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.789 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.789 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61857 00:05:33.789 11:25:39 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 61857 ']' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.789 11:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.789 [2024-12-16 11:25:39.132878] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:33.789 [2024-12-16 11:25:39.132995] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61857 ] 00:05:34.048 [2024-12-16 11:25:39.284908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.048 [2024-12-16 11:25:39.358104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.048 [2024-12-16 11:25:39.358236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.048 [2024-12-16 11:25:39.358613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.048 [2024-12-16 11:25:39.358625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.048 11:25:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.048 11:25:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:34.048 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.048 11:25:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.048 11:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.048 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.048 POWER: Cannot set governor of lcore 0 to performance 00:05:34.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.048 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.048 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.048 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.048 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:34.048 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:34.048 POWER: Unable to set Power Management Environment for lcore 0 00:05:34.049 [2024-12-16 11:25:39.406377] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:34.049 [2024-12-16 11:25:39.406620] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:34.049 [2024-12-16 11:25:39.406835] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.049 [2024-12-16 11:25:39.407085] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.049 [2024-12-16 11:25:39.407309] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.049 [2024-12-16 11:25:39.407532] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.049 11:25:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.049 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.049 11:25:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.049 11:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 [2024-12-16 11:25:39.510275] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:34.307 11:25:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.307 11:25:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.307 11:25:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 ************************************ 00:05:34.307 START TEST scheduler_create_thread 00:05:34.307 ************************************ 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 2 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 3 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 4 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 5 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 6 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 7 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 8 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 9 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 10 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.307 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.308 11:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.681 ************************************ 00:05:35.681 END TEST scheduler_create_thread 00:05:35.681 ************************************ 00:05:35.681 11:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.681 00:05:35.681 real 0m1.170s 00:05:35.681 user 0m0.014s 00:05:35.681 sys 0m0.004s 00:05:35.681 11:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.681 11:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.681 11:25:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.681 11:25:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61857 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 61857 ']' 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 61857 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61857 00:05:35.681 killing process with pid 61857 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61857' 00:05:35.681 11:25:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 61857 00:05:35.681 11:25:40 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 61857 00:05:35.938 [2024-12-16 11:25:41.173443] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:36.198 00:05:36.198 real 0m2.503s 00:05:36.198 user 0m2.755s 00:05:36.198 sys 0m0.365s 00:05:36.198 11:25:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.198 ************************************ 00:05:36.198 END TEST event_scheduler 00:05:36.198 ************************************ 00:05:36.198 11:25:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 11:25:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.198 11:25:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.198 11:25:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.198 11:25:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.198 11:25:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 ************************************ 00:05:36.198 START TEST app_repeat 00:05:36.198 ************************************ 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.198 Process app_repeat pid: 61939 00:05:36.198 spdk_app_start Round 0 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61939 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61939' 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.198 11:25:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61939 /var/tmp/spdk-nbd.sock 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61939 ']' 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.198 11:25:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 [2024-12-16 11:25:41.477193] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
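Both daemons so far (pids 61629 and 61857) were shut down through the same killprocess path visible in the traces: kill -0 to check the pid is still alive, ps --no-headers -o comm= to check it still names our reactor rather than a recycled pid (with a special case for sudo-owned processes), then kill followed by wait. A standalone sketch of that pattern, minus the sudo branch:

    # Shutdown pattern used for pids 61629 and 61857 above.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")    # guard against pid reuse
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                # SIGTERM by default
        wait "$pid" 2>/dev/null                    # collect the exit status
    }

Note that wait only reaps children of the calling shell, which holds here because the harness backgrounds each app itself.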
00:05:36.198 [2024-12-16 11:25:41.478001] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61939 ] 00:05:36.456 [2024-12-16 11:25:41.627828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.456 [2024-12-16 11:25:41.709198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.456 [2024-12-16 11:25:41.709208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.456 11:25:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.456 11:25:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:36.456 11:25:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.022 Malloc0 00:05:37.022 11:25:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.280 Malloc1 00:05:37.280 11:25:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.280 11:25:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.538 /dev/nbd0 00:05:37.538 11:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.538 11:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:37.538 11:25:42 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.538 1+0 records in 00:05:37.538 1+0 records out 00:05:37.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318226 s, 12.9 MB/s 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.538 11:25:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:37.538 11:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.538 11:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.538 11:25:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.796 /dev/nbd1 00:05:37.796 11:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.796 11:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.796 11:25:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.054 1+0 records in 00:05:38.054 1+0 records out 00:05:38.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349067 s, 11.7 MB/s 00:05:38.054 11:25:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.054 11:25:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.054 11:25:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.054 11:25:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.054 11:25:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.054 11:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.054 11:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.054 11:25:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.054 11:25:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
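Each nbd_start_disk above is followed by the same readiness dance: waitfornbd polls /proc/partitions up to 20 times for the new nbdX entry, then a single 4 KiB O_DIRECT dd read, size-checked with stat -c %s, proves the device actually serves I/O. A standalone sketch (the scratch path and poll interval are ours; the trace does not show a sleep):

    # Readiness check used after each nbd_start_disk above.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do            # same 20-try bound as the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # interval assumed, not shown in the log
        done
        (( i <= 20 )) || return 1
        local tmp=/tmp/nbdtest                     # scratch path assumed
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                           # a zero-byte read means the device is dead
    }

    # waitfornbd_sketch nbd0 && echo "nbd0 is up"

The nbd_dd_data_verify steps just below go further, writing 256 blocks of /dev/urandom through each device and reading them back with cmp -b -n 1M.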
00:05:38.054 11:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.312 { 00:05:38.312 "bdev_name": "Malloc0", 00:05:38.312 "nbd_device": "/dev/nbd0" 00:05:38.312 }, 00:05:38.312 { 00:05:38.312 "bdev_name": "Malloc1", 00:05:38.312 "nbd_device": "/dev/nbd1" 00:05:38.312 } 00:05:38.312 ]' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.312 { 00:05:38.312 "bdev_name": "Malloc0", 00:05:38.312 "nbd_device": "/dev/nbd0" 00:05:38.312 }, 00:05:38.312 { 00:05:38.312 "bdev_name": "Malloc1", 00:05:38.312 "nbd_device": "/dev/nbd1" 00:05:38.312 } 00:05:38.312 ]' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.312 /dev/nbd1' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.312 /dev/nbd1' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.312 11:25:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.312 256+0 records in 00:05:38.312 256+0 records out 00:05:38.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066018 s, 159 MB/s 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.313 256+0 records in 00:05:38.313 256+0 records out 00:05:38.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245765 s, 42.7 MB/s 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.313 256+0 records in 00:05:38.313 256+0 records out 00:05:38.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272807 s, 38.4 MB/s 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.313 11:25:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.313 11:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.879 11:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.137 11:25:44 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.137 11:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.394 11:25:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.394 11:25:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.663 11:25:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.921 [2024-12-16 11:25:45.133676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.921 [2024-12-16 11:25:45.195953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.921 [2024-12-16 11:25:45.195959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.921 [2024-12-16 11:25:45.250358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.921 [2024-12-16 11:25:45.250421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.228 11:25:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.228 spdk_app_start Round 1 00:05:43.228 11:25:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.228 11:25:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61939 /var/tmp/spdk-nbd.sock 00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61939 ']' 00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
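After nbd_get_disks confirms both devices are gone (the empty '[]' above), the harness kills the app with spdk_kill_instance SIGTERM over RPC, sleeps 3 seconds, and relaunches it for Round 1, blocking in waitforlisten until pid 61939 answers on /var/tmp/spdk-nbd.sock. A minimal sketch of that wait loop (max_retries=100 matches the traced helper; testing -S only proves the socket node exists, so the real helper presumably also probes with an actual RPC before returning):

    # Wait loop behind the "Waiting for process to start up and listen on
    # UNIX domain socket ..." message above.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk-nbd.sock}
        local i
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [ -S "$sock" ] && return 0               # socket node exists: app is up
            sleep 0.1
        done
        return 1                                     # timed out
    }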
00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.228 11:25:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.228 11:25:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.228 11:25:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.228 11:25:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.228 Malloc0 00:05:43.228 11:25:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.487 Malloc1 00:05:43.487 11:25:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.487 11:25:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.745 /dev/nbd0 00:05:44.003 11:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.003 11:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.003 1+0 records in 00:05:44.003 1+0 records out 
00:05:44.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220474 s, 18.6 MB/s 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.003 11:25:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.003 11:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.003 11:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.003 11:25:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.261 /dev/nbd1 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.261 1+0 records in 00:05:44.261 1+0 records out 00:05:44.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289189 s, 14.2 MB/s 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.261 11:25:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.261 11:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.519 { 00:05:44.519 "bdev_name": "Malloc0", 00:05:44.519 "nbd_device": "/dev/nbd0" 00:05:44.519 }, 00:05:44.519 { 00:05:44.519 "bdev_name": "Malloc1", 00:05:44.519 "nbd_device": "/dev/nbd1" 00:05:44.519 } 
00:05:44.519 ]' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.519 { 00:05:44.519 "bdev_name": "Malloc0", 00:05:44.519 "nbd_device": "/dev/nbd0" 00:05:44.519 }, 00:05:44.519 { 00:05:44.519 "bdev_name": "Malloc1", 00:05:44.519 "nbd_device": "/dev/nbd1" 00:05:44.519 } 00:05:44.519 ]' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.519 /dev/nbd1' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.519 /dev/nbd1' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.519 11:25:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.778 256+0 records in 00:05:44.778 256+0 records out 00:05:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00799324 s, 131 MB/s 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.778 256+0 records in 00:05:44.778 256+0 records out 00:05:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274633 s, 38.2 MB/s 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.778 256+0 records in 00:05:44.778 256+0 records out 00:05:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029008 s, 36.1 MB/s 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.778 11:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.778 11:25:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.036 11:25:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.294 11:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.553 11:25:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.553 11:25:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.120 11:25:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.378 [2024-12-16 11:25:51.552304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.378 [2024-12-16 11:25:51.631560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.378 [2024-12-16 11:25:51.631578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.378 [2024-12-16 11:25:51.711194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.378 [2024-12-16 11:25:51.711273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.981 11:25:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.981 spdk_app_start Round 2 00:05:48.981 11:25:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.981 11:25:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61939 /var/tmp/spdk-nbd.sock 00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61939 ']' 00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
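Before Round 2 repeats it, the write-and-verify pass that each round above performs condenses to the following; paths, block sizes, and the cmp invocation are taken verbatim from the traced dd/cmp commands.

# Condensed write/verify pass, as traced in each app_repeat round above.
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write: fill a 1 MiB temp file with random data, then copy it onto every NBD device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1M of each device against the temp file, then clean up
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"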
00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.981 11:25:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.239 11:25:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.239 11:25:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.239 11:25:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.805 Malloc0 00:05:49.805 11:25:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.063 Malloc1 00:05:50.063 11:25:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.063 11:25:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.320 /dev/nbd0 00:05:50.320 11:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.320 11:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.320 1+0 records in 00:05:50.320 1+0 records out 
00:05:50.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277402 s, 14.8 MB/s 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.320 11:25:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.320 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.320 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.320 11:25:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.579 /dev/nbd1 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.579 1+0 records in 00:05:50.579 1+0 records out 00:05:50.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348566 s, 11.8 MB/s 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.579 11:25:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.579 11:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.837 11:25:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.837 { 00:05:50.837 "bdev_name": "Malloc0", 00:05:50.837 "nbd_device": "/dev/nbd0" 00:05:50.837 }, 00:05:50.837 { 00:05:50.837 "bdev_name": "Malloc1", 00:05:50.837 "nbd_device": "/dev/nbd1" 00:05:50.837 } 
00:05:50.837 ]' 00:05:50.837 11:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.837 { 00:05:50.837 "bdev_name": "Malloc0", 00:05:50.837 "nbd_device": "/dev/nbd0" 00:05:50.837 }, 00:05:50.837 { 00:05:50.837 "bdev_name": "Malloc1", 00:05:50.837 "nbd_device": "/dev/nbd1" 00:05:50.837 } 00:05:50.837 ]' 00:05:50.837 11:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.095 /dev/nbd1' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.095 /dev/nbd1' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.095 256+0 records in 00:05:51.095 256+0 records out 00:05:51.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732685 s, 143 MB/s 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.095 256+0 records in 00:05:51.095 256+0 records out 00:05:51.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261212 s, 40.1 MB/s 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.095 256+0 records in 00:05:51.095 256+0 records out 00:05:51.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289327 s, 36.2 MB/s 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.095 11:25:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.095 11:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.354 11:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.612 11:25:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.870 11:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.870 11:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.870 11:25:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.128 11:25:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.128 11:25:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.385 11:25:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.385 [2024-12-16 11:25:57.798481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.644 [2024-12-16 11:25:57.861896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.644 [2024-12-16 11:25:57.861910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.644 [2024-12-16 11:25:57.918140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.644 [2024-12-16 11:25:57.918214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.931 11:26:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61939 /var/tmp/spdk-nbd.sock 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61939 ']' 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
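All three rounds above follow the same restart cycle, driven by the traced event.sh line numbers. A condensed sketch of that loop; the $app_pid variable and the elided loop body are assumptions, the helper names and RPC call are as traced.

# Sketch of the restart loop driving Rounds 0-2 above (condensed from the trace).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in {0..2}; do
    echo "spdk_app_start Round $i"                        # traced: event.sh@24
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock       # traced: event.sh@25; pid is an assumption
    # ... Malloc0/Malloc1 are created, exported as /dev/nbd0-1, written, and verified ...
    "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # traced: event.sh@34
    sleep 3                                               # traced: event.sh@35
done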
00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.931 11:26:00 event.app_repeat -- event/event.sh@39 -- # killprocess 61939 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 61939 ']' 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 61939 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61939 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.931 killing process with pid 61939 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61939' 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 61939 00:05:55.931 11:26:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 61939 00:05:55.931 spdk_app_start is called in Round 0. 00:05:55.931 Shutdown signal received, stop current app iteration 00:05:55.931 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:05:55.931 spdk_app_start is called in Round 1. 00:05:55.931 Shutdown signal received, stop current app iteration 00:05:55.931 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:05:55.931 spdk_app_start is called in Round 2. 00:05:55.931 Shutdown signal received, stop current app iteration 00:05:55.931 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:05:55.931 spdk_app_start is called in Round 3. 00:05:55.931 Shutdown signal received, stop current app iteration 00:05:55.931 11:26:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.931 11:26:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.931 00:05:55.931 real 0m19.708s 00:05:55.931 user 0m44.880s 00:05:55.931 sys 0m3.303s 00:05:55.931 11:26:01 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.931 11:26:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.931 ************************************ 00:05:55.931 END TEST app_repeat 00:05:55.931 ************************************ 00:05:55.931 11:26:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.931 11:26:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:55.931 11:26:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.931 11:26:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.931 11:26:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.931 ************************************ 00:05:55.931 START TEST cpu_locks 00:05:55.931 ************************************ 00:05:55.931 11:26:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:55.931 * Looking for test storage... 
00:05:55.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:55.931 11:26:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.931 11:26:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.931 11:26:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.193 11:26:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.193 --rc genhtml_branch_coverage=1 00:05:56.193 --rc genhtml_function_coverage=1 00:05:56.193 --rc genhtml_legend=1 00:05:56.193 --rc geninfo_all_blocks=1 00:05:56.193 --rc geninfo_unexecuted_blocks=1 00:05:56.193 00:05:56.193 ' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.193 --rc genhtml_branch_coverage=1 00:05:56.193 --rc genhtml_function_coverage=1 
00:05:56.193 --rc genhtml_legend=1 00:05:56.193 --rc geninfo_all_blocks=1 00:05:56.193 --rc geninfo_unexecuted_blocks=1 00:05:56.193 00:05:56.193 ' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.193 --rc genhtml_branch_coverage=1 00:05:56.193 --rc genhtml_function_coverage=1 00:05:56.193 --rc genhtml_legend=1 00:05:56.193 --rc geninfo_all_blocks=1 00:05:56.193 --rc geninfo_unexecuted_blocks=1 00:05:56.193 00:05:56.193 ' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.193 --rc genhtml_branch_coverage=1 00:05:56.193 --rc genhtml_function_coverage=1 00:05:56.193 --rc genhtml_legend=1 00:05:56.193 --rc geninfo_all_blocks=1 00:05:56.193 --rc geninfo_unexecuted_blocks=1 00:05:56.193 00:05:56.193 ' 00:05:56.193 11:26:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.193 11:26:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.193 11:26:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.193 11:26:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.193 11:26:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.193 ************************************ 00:05:56.193 START TEST default_locks 00:05:56.193 ************************************ 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62579 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62579 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 62579 ']' 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.193 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.193 [2024-12-16 11:26:01.486943] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
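The spdk_tgt for default_locks is initializing here; once it is listening, the locks_exist probe traced below reduces to a single pipeline over lslocks, exactly as shown at cpu_locks.sh@22.

# locks_exist, as traced below: a held core lock shows up in lslocks
# as an entry named spdk_cpu_lock* for the target pid.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist 62579    # pid of the spdk_tgt started above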
00:05:56.193 [2024-12-16 11:26:01.487060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62579 ] 00:05:56.452 [2024-12-16 11:26:01.630440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.452 [2024-12-16 11:26:01.692231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.717 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.717 11:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:56.717 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62579 00:05:56.717 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62579 00:05:56.717 11:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62579 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 62579 ']' 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 62579 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62579 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.284 killing process with pid 62579 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62579' 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 62579 00:05:57.284 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 62579 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62579 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62579 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 62579 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 62579 ']' 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.542 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62579) - No such process 00:05:57.542 ERROR: process (pid: 62579) is no longer running 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.542 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.543 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.543 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.543 11:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.543 00:05:57.543 real 0m1.464s 00:05:57.543 user 0m1.437s 00:05:57.543 sys 0m0.562s 00:05:57.543 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.543 11:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.543 ************************************ 00:05:57.543 END TEST default_locks 00:05:57.543 ************************************ 00:05:57.543 11:26:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.543 11:26:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.543 11:26:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.543 11:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.543 ************************************ 00:05:57.543 START TEST default_locks_via_rpc 00:05:57.543 ************************************ 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62625 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62625 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62625 ']' 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
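The "No such process" block above is the expected outcome of a negated assertion: after the target is killed, waitforlisten on the dead pid must fail. A minimal sketch of that NOT idiom, assuming the helper simply inverts the exit status; the real autotest_common.sh version also validates the wrapped command (the valid_exec_arg call traced above), which is omitted here.

# Minimal sketch of the NOT idiom: the assertion passes only if the command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1     # simplification of the traced (( es > 128 )) handling
    (( es != 0 ))              # invert: a nonzero child status makes NOT succeed
}
NOT waitforlisten 62579        # must fail -- pid 62579 was killed above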
00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.543 11:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.801 [2024-12-16 11:26:03.015905] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:57.801 [2024-12-16 11:26:03.016051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62625 ] 00:05:57.801 [2024-12-16 11:26:03.166431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.060 [2024-12-16 11:26:03.238890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62625 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.318 11:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 62625 ']' 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62625' 00:05:58.886 killing process with pid 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 62625 00:05:58.886 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 62625 00:05:59.145 00:05:59.145 real 0m1.524s 00:05:59.145 user 0m1.488s 00:05:59.145 sys 0m0.595s 00:05:59.145 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.145 11:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 ************************************ 00:05:59.145 END TEST default_locks_via_rpc 00:05:59.145 ************************************ 00:05:59.145 11:26:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.145 11:26:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.145 11:26:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.145 11:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 ************************************ 00:05:59.145 START TEST non_locking_app_on_locked_coremask 00:05:59.145 ************************************ 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62681 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62681 /var/tmp/spdk.sock 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62681 ']' 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.145 11:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.404 [2024-12-16 11:26:04.596202] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
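default_locks_via_rpc above toggles the core lock at runtime rather than at startup. Condensed, the traced sequence is the following; the two framework_*_cpumask_locks RPCs and the lslocks probe are as traced, while the lock-file glob pattern is an assumption (the trace only shows the no_locks helper's lock_files array coming back empty).

# Condensed sequence traced in default_locks_via_rpc above (pid 62625).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
shopt -s nullglob
"$rpc" framework_disable_cpumask_locks         # traced: cpu_locks.sh@65
lock_files=(/var/tmp/spdk_cpu_lock*)           # assumption: lock-file path pattern
(( ${#lock_files[@]} == 0 ))                   # traced no_locks: glob must be empty
"$rpc" framework_enable_cpumask_locks          # traced: cpu_locks.sh@69
lslocks -p 62625 | grep -q spdk_cpu_lock       # locks_exist again, as traced at @22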
00:05:59.404 [2024-12-16 11:26:04.596313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62681 ] 00:05:59.404 [2024-12-16 11:26:04.749372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.404 [2024-12-16 11:26:04.815917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.338 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.338 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.338 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62714 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62714 /var/tmp/spdk2.sock 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62714 ']' 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.339 11:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.339 [2024-12-16 11:26:05.682575] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:00.339 [2024-12-16 11:26:05.682671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62714 ] 00:06:00.598 [2024-12-16 11:26:05.842058] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.598 [2024-12-16 11:26:05.842137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.598 [2024-12-16 11:26:05.978050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.533 11:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.533 11:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.533 11:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62681 00:06:01.533 11:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62681 00:06:01.533 11:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62681 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62681 ']' 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62681 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62681 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.469 killing process with pid 62681 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62681' 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62681 00:06:02.469 11:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62681 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62714 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62714 ']' 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62714 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62714 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.037 killing process with pid 62714 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62714' 00:06:03.037 11:26:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62714 00:06:03.037 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62714 00:06:03.605 00:06:03.605 real 0m4.278s 00:06:03.605 user 0m4.804s 00:06:03.605 sys 0m1.203s 00:06:03.605 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.605 11:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.605 ************************************ 00:06:03.605 END TEST non_locking_app_on_locked_coremask 00:06:03.605 ************************************ 00:06:03.605 11:26:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.605 11:26:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.605 11:26:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.605 11:26:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.605 ************************************ 00:06:03.605 START TEST locking_app_on_unlocked_coremask 00:06:03.605 ************************************ 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62793 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62793 /var/tmp/spdk.sock 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62793 ']' 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.605 11:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.605 [2024-12-16 11:26:08.913694] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:03.605 [2024-12-16 11:26:08.913796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:06:03.864 [2024-12-16 11:26:09.054805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
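killprocess, traced twice in the test above, is more than a bare kill: it checks that the pid is set and alive, resolves the command name (SPDK reactors show up as reactor_<core>), refuses to signal a sudo wrapper, then kills and reaps. A rough sketch of the traced logic from test/common/autotest_common.sh (simplified; the real helper handles the sudo case instead of just bailing out):

killprocess() {
	local pid=$1
	[[ -n $pid ]] || return 1
	kill -0 "$pid" || return 1            # still alive?
	if [[ $(uname) == Linux ]]; then
		local process_name
		process_name=$(ps --no-headers -o comm= "$pid")
		[[ $process_name == sudo ]] && return 1   # never kill the sudo wrapper itself
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                           # reap so the next test starts clean
}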
00:06:03.864 [2024-12-16 11:26:09.054873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.864 [2024-12-16 11:26:09.121388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62821 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62821 /var/tmp/spdk2.sock 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62821 ']' 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.799 11:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.799 [2024-12-16 11:26:10.006560] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:04.799 [2024-12-16 11:26:10.006678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62821 ] 00:06:04.799 [2024-12-16 11:26:10.170372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.057 [2024-12-16 11:26:10.305720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.626 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.626 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.626 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62821 00:06:05.626 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62821 00:06:05.626 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62793 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62793 ']' 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 62793 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62793 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.561 killing process with pid 62793 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62793' 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 62793 00:06:06.561 11:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 62793 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62821 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62821 ']' 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 62821 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62821 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.127 killing process with pid 62821 00:06:07.127 11:26:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62821' 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 62821 00:06:07.127 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 62821 00:06:07.694 00:06:07.694 real 0m4.068s 00:06:07.694 user 0m4.546s 00:06:07.694 sys 0m1.090s 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.694 ************************************ 00:06:07.694 END TEST locking_app_on_unlocked_coremask 00:06:07.694 ************************************ 00:06:07.694 11:26:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.694 11:26:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.694 11:26:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.694 11:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.694 ************************************ 00:06:07.694 START TEST locking_app_on_locked_coremask 00:06:07.694 ************************************ 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62900 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62900 /var/tmp/spdk.sock 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62900 ']' 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.694 11:26:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.694 [2024-12-16 11:26:13.042590] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:07.694 [2024-12-16 11:26:13.042712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62900 ] 00:06:07.953 [2024-12-16 11:26:13.190682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.953 [2024-12-16 11:26:13.251278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62915 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62915 /var/tmp/spdk2.sock 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62915 /var/tmp/spdk2.sock 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 62915 /var/tmp/spdk2.sock 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62915 ']' 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.211 11:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.211 [2024-12-16 11:26:13.620946] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:08.211 [2024-12-16 11:26:13.621083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62915 ] 00:06:08.469 [2024-12-16 11:26:13.775704] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62900 has claimed it. 00:06:08.469 [2024-12-16 11:26:13.775787] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62915) - No such process 00:06:09.036 ERROR: process (pid: 62915) is no longer running 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62900 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.036 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62900 ']' 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.602 killing process with pid 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62900' 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62900 00:06:09.602 11:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62900 00:06:09.861 00:06:09.861 real 0m2.262s 00:06:09.861 user 0m2.469s 00:06:09.861 sys 0m0.689s 00:06:09.861 11:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.861 ************************************ 00:06:09.861 END 
TEST locking_app_on_locked_coremask 00:06:09.861 11:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.861 ************************************ 00:06:10.119 11:26:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.119 11:26:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.119 11:26:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.119 11:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.119 ************************************ 00:06:10.119 START TEST locking_overlapped_coremask 00:06:10.119 ************************************ 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62966 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62966 /var/tmp/spdk.sock 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 62966 ']' 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.119 11:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.119 [2024-12-16 11:26:15.361402] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
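locking_app_on_locked_coremask, which closed just above, is a negative test: the second spdk_tgt on the already-claimed mask must fail, so the harness wraps the wait in NOT, which inverts an exit status. A sketch of that wrapper, inferred from the es=... bookkeeping visible in the trace (the (( es > 128 )) check suggests signal deaths are still treated as real errors):

NOT() {
	local es=0
	"$@" || es=$?
	(( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
	(( es != 0 ))                    # succeed only when the wrapped command failed
}

# Illustrative usage matching the trace: pid2's target must die with
# "Cannot create lock on core 0", so the inverted wait passes.
NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock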
00:06:10.119 [2024-12-16 11:26:15.361541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62966 ] 00:06:10.119 [2024-12-16 11:26:15.515572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.378 [2024-12-16 11:26:15.593307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.378 [2024-12-16 11:26:15.593419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.378 [2024-12-16 11:26:15.593422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62996 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62996 /var/tmp/spdk2.sock 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62996 /var/tmp/spdk2.sock 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 62996 /var/tmp/spdk2.sock 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 62996 ']' 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.313 11:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.313 [2024-12-16 11:26:16.508899] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:11.313 [2024-12-16 11:26:16.509068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62996 ] 00:06:11.313 [2024-12-16 11:26:16.680340] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62966 has claimed it. 00:06:11.313 [2024-12-16 11:26:16.680434] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.880 ERROR: process (pid: 62996) is no longer running 00:06:11.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62996) - No such process 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62966 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 62966 ']' 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 62966 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62966 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.880 killing process with pid 62966 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62966' 00:06:11.880 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 62966 00:06:11.880 11:26:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 62966 00:06:12.446 00:06:12.446 real 0m2.421s 00:06:12.446 user 0m6.970s 00:06:12.446 sys 0m0.476s 00:06:12.446 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.446 11:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.446 ************************************ 00:06:12.446 END TEST locking_overlapped_coremask 00:06:12.446 ************************************ 00:06:12.446 11:26:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.446 11:26:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.446 11:26:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.446 11:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.447 ************************************ 00:06:12.447 START TEST locking_overlapped_coremask_via_rpc 00:06:12.447 ************************************ 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63049 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63049 /var/tmp/spdk.sock 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63049 ']' 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.447 11:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.447 [2024-12-16 11:26:17.842228] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:12.447 [2024-12-16 11:26:17.842391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63049 ] 00:06:12.705 [2024-12-16 11:26:17.996098] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
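The overlapped test pits -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the masks intersect only on core 2, exactly the core named in the claim error above. The overlap is plain bit arithmetic (illustrative only):

mask_a=0x7    # cores 0,1,2 - first spdk_tgt
mask_b=0x1c   # cores 2,3,4 - second spdk_tgt
printf 'overlap: 0x%x\n' $(( mask_a & mask_b ))   # 0x4, i.e. core 2 only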
00:06:12.705 [2024-12-16 11:26:17.996370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.705 [2024-12-16 11:26:18.067746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.705 [2024-12-16 11:26:18.067838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.705 [2024-12-16 11:26:18.067844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63079 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63079 /var/tmp/spdk2.sock 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63079 ']' 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.663 11:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.663 [2024-12-16 11:26:18.904506] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:13.663 [2024-12-16 11:26:18.904598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:06:13.663 [2024-12-16 11:26:19.064445] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.663 [2024-12-16 11:26:19.064502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.922 [2024-12-16 11:26:19.209669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.922 [2024-12-16 11:26:19.213142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.922 [2024-12-16 11:26:19.213143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.858 [2024-12-16 11:26:19.979191] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63049 has claimed it. 
00:06:14.858 2024/12/16 11:26:19 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:14.858 request: 00:06:14.858 { 00:06:14.858 "method": "framework_enable_cpumask_locks", 00:06:14.858 "params": {} 00:06:14.858 } 00:06:14.858 Got JSON-RPC error response 00:06:14.858 GoRPCClient: error on JSON-RPC call 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63049 /var/tmp/spdk.sock 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63049 ']' 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.858 11:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63079 /var/tmp/spdk2.sock 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63079 ']' 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
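The via_rpc variant starts both targets with --disable-cpumask-locks and only turns locking on afterwards over JSON-RPC; the second enable fails with Code=-32603 because the first target already owns core 2. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the exchange can be reproduced by hand roughly like this (socket paths and method name as in the log; a sketch, not the harness's exact invocation):

# First target (-m 0x7, default socket): claims cores 0-2.
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

# Second target (-m 0x1c, spdk2.sock): expected to fail, since core 2 is
# already locked - the log shows "Failed to claim CPU core: 2".
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks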
00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.117 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 ************************************ 00:06:15.376 END TEST locking_overlapped_coremask_via_rpc 00:06:15.376 ************************************ 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.376 00:06:15.376 real 0m2.901s 00:06:15.376 user 0m1.587s 00:06:15.376 sys 0m0.241s 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.376 11:26:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 11:26:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:15.376 11:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63049 ]] 00:06:15.376 11:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63049 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63049 ']' 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63049 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63049 00:06:15.376 killing process with pid 63049 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63049' 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 63049 00:06:15.376 11:26:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 63049 00:06:15.944 11:26:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63079 ]] 00:06:15.944 11:26:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63079 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63079 ']' 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63079 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.944 
11:26:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63079 00:06:15.944 killing process with pid 63079 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63079' 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 63079 00:06:15.944 11:26:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 63079 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63049 ]] 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63049 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63049 ']' 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63049 00:06:16.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (63049) - No such process 00:06:16.202 Process with pid 63049 is not found 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 63049 is not found' 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63079 ]] 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63079 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63079 ']' 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63079 00:06:16.202 Process with pid 63079 is not found 00:06:16.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (63079) - No such process 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 63079 is not found' 00:06:16.202 11:26:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.202 00:06:16.202 real 0m20.375s 00:06:16.202 user 0m37.319s 00:06:16.202 sys 0m5.785s 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.202 11:26:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.202 ************************************ 00:06:16.202 END TEST cpu_locks 00:06:16.202 ************************************ 00:06:16.461 00:06:16.461 real 0m46.927s 00:06:16.461 user 1m31.536s 00:06:16.461 sys 0m9.863s 00:06:16.461 11:26:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.461 11:26:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.461 ************************************ 00:06:16.461 END TEST event 00:06:16.461 ************************************ 00:06:16.461 11:26:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.461 11:26:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.461 11:26:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.461 11:26:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.461 ************************************ 00:06:16.461 START TEST thread 00:06:16.461 ************************************ 00:06:16.461 11:26:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.461 * Looking for test storage... 
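The suite's teardown above has to be idempotent: each pid is killed only if it is set, an already-gone process is reported ("Process with pid ... is not found") rather than failing the run, and the lock files are removed last. A condensed sketch of that cleanup (pid variable names taken from the trace; the real script's structure may differ):

cleanup() {
	local pid
	for pid in "$spdk_tgt_pid" "$spdk_tgt_pid2"; do
		[[ -z $pid ]] && continue
		if ! kill -0 "$pid" 2> /dev/null; then
			echo "Process with pid $pid is not found"
			continue
		fi
		killprocess "$pid"
	done
	rm -f /var/tmp/spdk_cpu_lock_*   # drop any leftover core lock files
}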
00:06:16.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.461 11:26:21 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.461 11:26:21 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.461 11:26:21 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.461 11:26:21 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.461 11:26:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.461 11:26:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.461 11:26:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.461 11:26:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.461 11:26:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.462 11:26:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.462 11:26:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.462 11:26:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.462 11:26:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.462 11:26:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.462 11:26:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.462 11:26:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:16.462 11:26:21 thread -- scripts/common.sh@345 -- # : 1 00:06:16.462 11:26:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.462 11:26:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.462 11:26:21 thread -- scripts/common.sh@365 -- # decimal 1 00:06:16.462 11:26:21 thread -- scripts/common.sh@353 -- # local d=1 00:06:16.462 11:26:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.462 11:26:21 thread -- scripts/common.sh@355 -- # echo 1 00:06:16.462 11:26:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.462 11:26:21 thread -- scripts/common.sh@366 -- # decimal 2 00:06:16.462 11:26:21 thread -- scripts/common.sh@353 -- # local d=2 00:06:16.462 11:26:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.462 11:26:21 thread -- scripts/common.sh@355 -- # echo 2 00:06:16.462 11:26:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.462 11:26:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.462 11:26:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.462 11:26:21 thread -- scripts/common.sh@368 -- # return 0 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.462 --rc genhtml_branch_coverage=1 00:06:16.462 --rc genhtml_function_coverage=1 00:06:16.462 --rc genhtml_legend=1 00:06:16.462 --rc geninfo_all_blocks=1 00:06:16.462 --rc geninfo_unexecuted_blocks=1 00:06:16.462 00:06:16.462 ' 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.462 --rc genhtml_branch_coverage=1 00:06:16.462 --rc genhtml_function_coverage=1 00:06:16.462 --rc genhtml_legend=1 00:06:16.462 --rc geninfo_all_blocks=1 00:06:16.462 --rc geninfo_unexecuted_blocks=1 00:06:16.462 00:06:16.462 ' 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:16.462 --rc genhtml_branch_coverage=1 00:06:16.462 --rc genhtml_function_coverage=1 00:06:16.462 --rc genhtml_legend=1 00:06:16.462 --rc geninfo_all_blocks=1 00:06:16.462 --rc geninfo_unexecuted_blocks=1 00:06:16.462 00:06:16.462 ' 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.462 --rc genhtml_branch_coverage=1 00:06:16.462 --rc genhtml_function_coverage=1 00:06:16.462 --rc genhtml_legend=1 00:06:16.462 --rc geninfo_all_blocks=1 00:06:16.462 --rc geninfo_unexecuted_blocks=1 00:06:16.462 00:06:16.462 ' 00:06:16.462 11:26:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.462 11:26:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.462 ************************************ 00:06:16.462 START TEST thread_poller_perf 00:06:16.462 ************************************ 00:06:16.462 11:26:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.462 [2024-12-16 11:26:21.857711] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:16.462 [2024-12-16 11:26:21.858117] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63239 ] 00:06:16.721 [2024-12-16 11:26:22.007750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.721 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:16.721 [2024-12-16 11:26:22.080930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.095 [2024-12-16T11:26:23.509Z] ====================================== 00:06:18.095 [2024-12-16T11:26:23.509Z] busy:2211059791 (cyc) 00:06:18.095 [2024-12-16T11:26:23.509Z] total_run_count: 311000 00:06:18.095 [2024-12-16T11:26:23.509Z] tsc_hz: 2200000000 (cyc) 00:06:18.095 [2024-12-16T11:26:23.509Z] ====================================== 00:06:18.095 [2024-12-16T11:26:23.509Z] poller_cost: 7109 (cyc), 3231 (nsec) 00:06:18.095 00:06:18.095 real 0m1.295s 00:06:18.095 user 0m1.146s 00:06:18.095 sys 0m0.042s 00:06:18.095 11:26:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.095 ************************************ 00:06:18.095 END TEST thread_poller_perf 00:06:18.095 ************************************ 00:06:18.095 11:26:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.095 11:26:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.095 11:26:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:18.095 11:26:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.095 11:26:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.095 ************************************ 00:06:18.095 START TEST thread_poller_perf 00:06:18.095 ************************************ 00:06:18.095 11:26:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.095 [2024-12-16 11:26:23.211842] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:18.095 [2024-12-16 11:26:23.211961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:06:18.095 [2024-12-16 11:26:23.361701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.095 Running 1000 pollers for 1 seconds with 0 microseconds period. 
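The first poller_perf run is now complete, and its summary is internally consistent: with -b 1000 pollers, a -l 1 microsecond period, and a -t 1 second run (evidently the flags behind the "Running 1000 pollers..." banner), poller_cost is just the busy cycle count divided by total_run_count, converted to nanoseconds via the reported tsc_hz. One consistent way to reproduce the reported numbers, as a minimal bash sketch with the values copied from the run above:

    # Reproducing the first run's summary (values copied from the log above).
    busy_cyc=2211059791       # busy: cycles spent inside pollers over the 1 s window
    total_run_count=311000    # number of poller executions in that window
    tsc_hz=2200000000         # reported timestamp-counter frequency (2.2 GHz)

    poller_cost_cyc=$(( busy_cyc / total_run_count ))              # -> 7109 (cyc)
    poller_cost_nsec=$(( poller_cost_cyc * 1000000000 / tsc_hz ))  # -> 3231 (nsec)
    echo "poller_cost: ${poller_cost_cyc} (cyc), ${poller_cost_nsec} (nsec)"

The same arithmetic applied to the zero-period run just launched (busy 2201979186 over 3804000 runs) gives 578 cyc and 262 nsec, matching the summary that follows.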
00:06:18.095 [2024-12-16 11:26:23.426492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.475 [2024-12-16T11:26:24.889Z] ====================================== 00:06:19.475 [2024-12-16T11:26:24.889Z] busy:2201979186 (cyc) 00:06:19.475 [2024-12-16T11:26:24.889Z] total_run_count: 3804000 00:06:19.475 [2024-12-16T11:26:24.889Z] tsc_hz: 2200000000 (cyc) 00:06:19.475 [2024-12-16T11:26:24.889Z] ====================================== 00:06:19.475 [2024-12-16T11:26:24.889Z] poller_cost: 578 (cyc), 262 (nsec) 00:06:19.475 00:06:19.475 real 0m1.286s 00:06:19.475 user 0m1.134s 00:06:19.475 sys 0m0.045s 00:06:19.475 11:26:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.475 ************************************ 00:06:19.475 11:26:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 END TEST thread_poller_perf 00:06:19.475 ************************************ 00:06:19.475 11:26:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.475 00:06:19.475 real 0m2.853s 00:06:19.475 user 0m2.414s 00:06:19.475 sys 0m0.229s 00:06:19.475 11:26:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.475 11:26:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 ************************************ 00:06:19.475 END TEST thread 00:06:19.475 ************************************ 00:06:19.475 11:26:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:19.475 11:26:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.475 11:26:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.475 11:26:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.475 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 ************************************ 00:06:19.475 START TEST app_cmdline 00:06:19.475 ************************************ 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.475 * Looking for test storage... 
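Before every run_test body, autotest_common.sh re-checks the installed lcov against version 2; that is the "lt 1.15 2" / cmp_versions trace that repeats throughout this log, including at the top of the app_cmdline section starting here. The mechanics are simple: split both version strings on dots and dashes, then compare field by field until one side wins. A stripped-down sketch of the comparison as traced:

    # Field-wise version compare, as in the scripts/common.sh cmp_versions trace:
    IFS=.- read -ra ver1 <<< "1.15"   # installed lcov -> (1 15)
    IFS=.- read -ra ver2 <<< "2"      # required       -> (2)
    # The first differing field decides: 1 < 2, so "1.15 < 2" holds (return 0)
    # and the legacy --rc lcov_*_coverage=1 options get exported.
    (( ver1[0] < ver2[0] )) && echo "lcov < 2: using legacy LCOV_OPTS"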
00:06:19.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.475 11:26:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.475 --rc genhtml_branch_coverage=1 00:06:19.475 --rc genhtml_function_coverage=1 00:06:19.475 --rc genhtml_legend=1 00:06:19.475 --rc geninfo_all_blocks=1 00:06:19.475 --rc geninfo_unexecuted_blocks=1 00:06:19.475 00:06:19.475 ' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.475 --rc genhtml_branch_coverage=1 00:06:19.475 --rc genhtml_function_coverage=1 00:06:19.475 --rc genhtml_legend=1 00:06:19.475 --rc geninfo_all_blocks=1 00:06:19.475 --rc geninfo_unexecuted_blocks=1 00:06:19.475 
00:06:19.475 ' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.475 --rc genhtml_branch_coverage=1 00:06:19.475 --rc genhtml_function_coverage=1 00:06:19.475 --rc genhtml_legend=1 00:06:19.475 --rc geninfo_all_blocks=1 00:06:19.475 --rc geninfo_unexecuted_blocks=1 00:06:19.475 00:06:19.475 ' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.475 --rc genhtml_branch_coverage=1 00:06:19.475 --rc genhtml_function_coverage=1 00:06:19.475 --rc genhtml_legend=1 00:06:19.475 --rc geninfo_all_blocks=1 00:06:19.475 --rc geninfo_unexecuted_blocks=1 00:06:19.475 00:06:19.475 ' 00:06:19.475 11:26:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:19.475 11:26:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63357 00:06:19.475 11:26:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63357 00:06:19.475 11:26:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 63357 ']' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.475 11:26:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 [2024-12-16 11:26:24.840768] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
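The target under test here is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are served; any other method must fail with code -32601 (Method not found), which is exactly what the env_dpdk_get_mem_stats probe further down exercises. A minimal manual check of the same allow-list behavior, using the rpc.py client this test already invokes:

    # Allowed method: prints the version object also shown in this log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

    # Not on the allow-list: the target answers with JSON-RPC error -32601
    # and rpc.py exits non-zero (the test's NOT wrapper relies on that).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo "rejected, as expected"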
00:06:19.475 [2024-12-16 11:26:24.840879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63357 ] 00:06:19.734 [2024-12-16 11:26:24.994720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.734 [2024-12-16 11:26:25.065453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.670 11:26:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.670 11:26:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:20.670 11:26:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.929 { 00:06:20.929 "fields": { 00:06:20.929 "commit": "a393e5e6e", 00:06:20.929 "major": 25, 00:06:20.929 "minor": 1, 00:06:20.929 "patch": 0, 00:06:20.929 "suffix": "-pre" 00:06:20.929 }, 00:06:20.929 "version": "SPDK v25.01-pre git sha1 a393e5e6e" 00:06:20.929 } 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.929 11:26:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.929 11:26:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.188 2024/12/16 11:26:26 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:21.189 request: 00:06:21.189 { 00:06:21.189 "method": "env_dpdk_get_mem_stats", 00:06:21.189 "params": {} 00:06:21.189 } 00:06:21.189 Got JSON-RPC error response 00:06:21.189 GoRPCClient: error on JSON-RPC call 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.189 11:26:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63357 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 63357 ']' 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 63357 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63357 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.189 killing process with pid 63357 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63357' 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 63357 00:06:21.189 11:26:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 63357 00:06:21.448 00:06:21.448 real 0m2.264s 00:06:21.448 user 0m2.742s 00:06:21.448 sys 0m0.570s 00:06:21.448 ************************************ 00:06:21.448 END TEST app_cmdline 00:06:21.448 ************************************ 00:06:21.448 11:26:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.448 11:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.707 11:26:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.707 11:26:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.707 11:26:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.707 11:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.707 ************************************ 00:06:21.707 START TEST version 00:06:21.707 ************************************ 00:06:21.707 11:26:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.707 * Looking for test storage... 
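The version test starting here never talks to a running target at all: per the trace below, get_header_version just greps the SPDK_VERSION_* macros out of include/spdk/version.h, and version.sh then compares the assembled string against what the installed Python bindings report. The pipeline in isolation, with the paths used in this workspace:

    # What get_header_version does for each field, per the trace below:
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # -> 25
    # MAJOR/MINOR/PATCH/SUFFIX combine into "25.1"; with patch == 0 the "-pre"
    # suffix is rendered as "25.1rc0" and checked against
    # python3 -c 'import spdk; print(spdk.__version__)'   # -> 25.1rc0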
00:06:21.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.707 11:26:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.707 11:26:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.707 11:26:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.707 11:26:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.707 11:26:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.707 11:26:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.707 11:26:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.707 11:26:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.707 11:26:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.707 11:26:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.707 11:26:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.707 11:26:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.707 11:26:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.707 11:26:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.707 11:26:27 version -- scripts/common.sh@344 -- # case "$op" in 00:06:21.707 11:26:27 version -- scripts/common.sh@345 -- # : 1 00:06:21.707 11:26:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.707 11:26:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.707 11:26:27 version -- scripts/common.sh@365 -- # decimal 1 00:06:21.707 11:26:27 version -- scripts/common.sh@353 -- # local d=1 00:06:21.707 11:26:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.707 11:26:27 version -- scripts/common.sh@355 -- # echo 1 00:06:21.707 11:26:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.707 11:26:27 version -- scripts/common.sh@366 -- # decimal 2 00:06:21.707 11:26:27 version -- scripts/common.sh@353 -- # local d=2 00:06:21.707 11:26:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.707 11:26:27 version -- scripts/common.sh@355 -- # echo 2 00:06:21.707 11:26:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.707 11:26:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.707 11:26:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.707 11:26:27 version -- scripts/common.sh@368 -- # return 0 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.707 --rc genhtml_branch_coverage=1 00:06:21.707 --rc genhtml_function_coverage=1 00:06:21.707 --rc genhtml_legend=1 00:06:21.707 --rc geninfo_all_blocks=1 00:06:21.707 --rc geninfo_unexecuted_blocks=1 00:06:21.707 00:06:21.707 ' 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.707 --rc genhtml_branch_coverage=1 00:06:21.707 --rc genhtml_function_coverage=1 00:06:21.707 --rc genhtml_legend=1 00:06:21.707 --rc geninfo_all_blocks=1 00:06:21.707 --rc geninfo_unexecuted_blocks=1 00:06:21.707 00:06:21.707 ' 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.707 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:21.707 --rc genhtml_branch_coverage=1 00:06:21.707 --rc genhtml_function_coverage=1 00:06:21.707 --rc genhtml_legend=1 00:06:21.707 --rc geninfo_all_blocks=1 00:06:21.707 --rc geninfo_unexecuted_blocks=1 00:06:21.707 00:06:21.707 ' 00:06:21.707 11:26:27 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.707 --rc genhtml_branch_coverage=1 00:06:21.707 --rc genhtml_function_coverage=1 00:06:21.707 --rc genhtml_legend=1 00:06:21.707 --rc geninfo_all_blocks=1 00:06:21.707 --rc geninfo_unexecuted_blocks=1 00:06:21.707 00:06:21.707 ' 00:06:21.707 11:26:27 version -- app/version.sh@17 -- # get_header_version major 00:06:21.707 11:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # cut -f2 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.707 11:26:27 version -- app/version.sh@17 -- # major=25 00:06:21.707 11:26:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:21.707 11:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # cut -f2 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.707 11:26:27 version -- app/version.sh@18 -- # minor=1 00:06:21.707 11:26:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # cut -f2 00:06:21.707 11:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.707 11:26:27 version -- app/version.sh@19 -- # patch=0 00:06:21.707 11:26:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:21.707 11:26:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # cut -f2 00:06:21.707 11:26:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.707 11:26:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:21.707 11:26:27 version -- app/version.sh@22 -- # version=25.1 00:06:21.707 11:26:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:21.707 11:26:27 version -- app/version.sh@28 -- # version=25.1rc0 00:06:21.707 11:26:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:21.707 11:26:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:21.967 11:26:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:21.967 11:26:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:21.967 00:06:21.967 real 0m0.252s 00:06:21.967 user 0m0.162s 00:06:21.967 sys 0m0.126s 00:06:21.967 11:26:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.967 11:26:27 version -- common/autotest_common.sh@10 -- # set +x 00:06:21.967 ************************************ 00:06:21.967 END TEST version 00:06:21.967 ************************************ 00:06:21.967 11:26:27 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:21.967 11:26:27 -- spdk/autotest.sh@194 -- # uname -s 00:06:21.967 11:26:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:21.967 11:26:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.967 11:26:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.967 11:26:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:21.967 11:26:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.967 11:26:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.967 11:26:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:21.967 11:26:27 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:21.967 11:26:27 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:21.967 11:26:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.967 11:26:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.967 11:26:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.967 ************************************ 00:06:21.967 START TEST nvmf_tcp 00:06:21.967 ************************************ 00:06:21.967 11:26:27 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:21.967 * Looking for test storage... 00:06:21.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:21.967 11:26:27 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.967 11:26:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.967 11:26:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.228 11:26:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.228 --rc genhtml_branch_coverage=1 00:06:22.228 --rc genhtml_function_coverage=1 00:06:22.228 --rc genhtml_legend=1 00:06:22.228 --rc geninfo_all_blocks=1 00:06:22.228 --rc geninfo_unexecuted_blocks=1 00:06:22.228 00:06:22.228 ' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.228 --rc genhtml_branch_coverage=1 00:06:22.228 --rc genhtml_function_coverage=1 00:06:22.228 --rc genhtml_legend=1 00:06:22.228 --rc geninfo_all_blocks=1 00:06:22.228 --rc geninfo_unexecuted_blocks=1 00:06:22.228 00:06:22.228 ' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.228 --rc genhtml_branch_coverage=1 00:06:22.228 --rc genhtml_function_coverage=1 00:06:22.228 --rc genhtml_legend=1 00:06:22.228 --rc geninfo_all_blocks=1 00:06:22.228 --rc geninfo_unexecuted_blocks=1 00:06:22.228 00:06:22.228 ' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.228 --rc genhtml_branch_coverage=1 00:06:22.228 --rc genhtml_function_coverage=1 00:06:22.228 --rc genhtml_legend=1 00:06:22.228 --rc geninfo_all_blocks=1 00:06:22.228 --rc geninfo_unexecuted_blocks=1 00:06:22.228 00:06:22.228 ' 00:06:22.228 11:26:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:22.228 11:26:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:22.228 11:26:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.228 11:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.228 ************************************ 00:06:22.228 START TEST nvmf_target_core 00:06:22.228 ************************************ 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:22.228 * Looking for test storage... 00:06:22.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:22.228 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.229 --rc genhtml_branch_coverage=1 00:06:22.229 --rc genhtml_function_coverage=1 00:06:22.229 --rc genhtml_legend=1 00:06:22.229 --rc geninfo_all_blocks=1 00:06:22.229 --rc geninfo_unexecuted_blocks=1 00:06:22.229 00:06:22.229 ' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.229 --rc genhtml_branch_coverage=1 00:06:22.229 --rc genhtml_function_coverage=1 00:06:22.229 --rc genhtml_legend=1 00:06:22.229 --rc geninfo_all_blocks=1 00:06:22.229 --rc geninfo_unexecuted_blocks=1 00:06:22.229 00:06:22.229 ' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.229 --rc genhtml_branch_coverage=1 00:06:22.229 --rc genhtml_function_coverage=1 00:06:22.229 --rc genhtml_legend=1 00:06:22.229 --rc geninfo_all_blocks=1 00:06:22.229 --rc geninfo_unexecuted_blocks=1 00:06:22.229 00:06:22.229 ' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.229 --rc genhtml_branch_coverage=1 00:06:22.229 --rc genhtml_function_coverage=1 00:06:22.229 --rc genhtml_legend=1 00:06:22.229 --rc geninfo_all_blocks=1 00:06:22.229 --rc geninfo_unexecuted_blocks=1 00:06:22.229 00:06:22.229 ' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:22.229 ************************************ 00:06:22.229 START TEST nvmf_abort 00:06:22.229 ************************************ 00:06:22.229 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:22.488 * Looking for test storage... 
00:06:22.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.488 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.489 --rc genhtml_branch_coverage=1 00:06:22.489 --rc genhtml_function_coverage=1 00:06:22.489 --rc genhtml_legend=1 00:06:22.489 --rc geninfo_all_blocks=1 00:06:22.489 --rc geninfo_unexecuted_blocks=1 00:06:22.489 00:06:22.489 ' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.489 --rc genhtml_branch_coverage=1 00:06:22.489 --rc genhtml_function_coverage=1 00:06:22.489 --rc genhtml_legend=1 00:06:22.489 --rc geninfo_all_blocks=1 00:06:22.489 --rc geninfo_unexecuted_blocks=1 00:06:22.489 00:06:22.489 ' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.489 --rc genhtml_branch_coverage=1 00:06:22.489 --rc genhtml_function_coverage=1 00:06:22.489 --rc genhtml_legend=1 00:06:22.489 --rc geninfo_all_blocks=1 00:06:22.489 --rc geninfo_unexecuted_blocks=1 00:06:22.489 00:06:22.489 ' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.489 --rc genhtml_branch_coverage=1 00:06:22.489 --rc genhtml_function_coverage=1 00:06:22.489 --rc genhtml_legend=1 00:06:22.489 --rc geninfo_all_blocks=1 00:06:22.489 --rc geninfo_unexecuted_blocks=1 00:06:22.489 00:06:22.489 ' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
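From this point, nvmf/common.sh's nvmf_veth_init builds the virtual test network the rest of this section runs against: two initiator veth interfaces on the host (10.0.0.1/24 and 10.0.0.2/24), two target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), their peers enslaved to an nvmf_br bridge, and iptables ACCEPT rules for NVMF_PORT 4420. The "Cannot find device" lines below are expected: cleanup of any previous topology runs first and finds nothing to delete. Condensed to one initiator/target pair (the trace creates two of each, in a slightly different order), the wiring amounts to:

    # Essentials of the nvmf_veth_init topology traced below (one link pair shown):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Bridge the host-side peers so initiator and target segments can talk.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br

After this, the initiator at 10.0.0.1 reaches the in-namespace target listener at 10.0.0.3 on TCP port 4420 through the bridge.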
00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:22.489 
11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:22.489 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:06:22.490 Cannot find device "nvmf_init_br" 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:22.490 Cannot find device "nvmf_init_br2" 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:22.490 Cannot find device "nvmf_tgt_br" 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:22.490 Cannot find device "nvmf_tgt_br2" 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:22.490 Cannot find device "nvmf_init_br" 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:06:22.490 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:22.749 Cannot find device "nvmf_init_br2" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:22.749 Cannot find device "nvmf_tgt_br" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:22.749 Cannot find device "nvmf_tgt_br2" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:22.749 Cannot find device "nvmf_br" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:22.749 Cannot find device "nvmf_init_if" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:22.749 Cannot find device "nvmf_init_if2" 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:22.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:22.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:22.749 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:22.749 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:23.007 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:23.007 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:23.007 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:23.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:23.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:06:23.008 00:06:23.008 --- 10.0.0.3 ping statistics --- 00:06:23.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.008 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:23.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:23.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:06:23.008 00:06:23.008 --- 10.0.0.4 ping statistics --- 00:06:23.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.008 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:23.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:06:23.008 00:06:23.008 --- 10.0.0.1 ping statistics --- 00:06:23.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.008 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:23.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:06:23.008 00:06:23.008 --- 10.0.0.2 ping statistics --- 00:06:23.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.008 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=63784 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 63784 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 63784 ']' 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.008 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.266 [2024-12-16 11:26:28.452715] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
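
The nvmftestinit sequence traced above is a plain veth-and-bridge topology: one leg of each veth pair stays on the host, the other moves into the nvmf_tgt_ns_spdk namespace, and the four pings verify both directions before the target is launched inside the namespace. A minimal hand-rolled equivalent, using only the interface names, addresses, and commands that appear in the trace (run as root; the second initiator/target pair and the ipts comment-tagging wrapper are left out for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, both ends on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side ends together
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                            # host -> namespaced target, as in the trace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The earlier "Cannot find device" / "Cannot open network namespace" lines are harmless: the init path runs the teardown commands first (common.sh@162-174), and on a fresh VM there is nothing to tear down, hence the true recorded after each one.
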
00:06:23.266 [2024-12-16 11:26:28.452833] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.266 [2024-12-16 11:26:28.607231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.266 [2024-12-16 11:26:28.667383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.266 [2024-12-16 11:26:28.667466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.266 [2024-12-16 11:26:28.667491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.266 [2024-12-16 11:26:28.667502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.266 [2024-12-16 11:26:28.667511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.267 [2024-12-16 11:26:28.668853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.267 [2024-12-16 11:26:28.668954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.267 [2024-12-16 11:26:28.668965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 [2024-12-16 11:26:28.861433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 Malloc0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 
Delay0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.525 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.784 [2024-12-16 11:26:28.939539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.784 11:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:23.784 [2024-12-16 11:26:29.139864] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:26.315 Initializing NVMe Controllers 00:06:26.315 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.315 controller IO queue size 128 less than required 00:06:26.315 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:26.315 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:26.315 Initialization complete. Launching workers. 
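
The workload just launched is SPDK's stock abort example; re-wrapped from the trace for readability (flags exactly as logged, comments are this sketch's reading of them):

    # -c 0x1: one worker core; -t 1: run for one second; -l warning: quiet logging;
    # -q 128: queue depth -- racing this many queued I/Os against abort commands is
    # what triggers the 'IO queue size 128 less than required' notice above
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The tallies printed below (I/O completed vs. failed, aborts submitted vs. successful) are how those races resolved over the one-second run.
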
00:06:26.315 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26737 00:06:26.315 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26798, failed to submit 62 00:06:26.315 success 26741, unsuccessful 57, failed 0 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.315 rmmod nvme_tcp 00:06:26.315 rmmod nvme_fabrics 00:06:26.315 rmmod nvme_keyring 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 63784 ']' 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 63784 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 63784 ']' 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 63784 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63784 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:26.315 killing process with pid 63784 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63784' 00:06:26.315 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 63784 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 63784 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:26.316 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:06:26.574 00:06:26.574 real 0m4.155s 00:06:26.574 user 0m10.543s 00:06:26.574 sys 0m1.151s 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.574 ************************************ 00:06:26.574 END TEST nvmf_abort 00:06:26.574 ************************************ 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.574 ************************************ 00:06:26.574 START TEST nvmf_ns_hotplug_stress 00:06:26.574 ************************************ 00:06:26.574 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:26.575 * Looking for test storage... 00:06:26.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:26.575 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.575 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.575 11:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.834 --rc genhtml_branch_coverage=1 00:06:26.834 --rc genhtml_function_coverage=1 00:06:26.834 --rc genhtml_legend=1 00:06:26.834 --rc geninfo_all_blocks=1 00:06:26.834 --rc geninfo_unexecuted_blocks=1 00:06:26.834 00:06:26.834 ' 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.834 --rc genhtml_branch_coverage=1 00:06:26.834 --rc genhtml_function_coverage=1 00:06:26.834 --rc genhtml_legend=1 00:06:26.834 --rc geninfo_all_blocks=1 00:06:26.834 --rc geninfo_unexecuted_blocks=1 00:06:26.834 00:06:26.834 ' 00:06:26.834 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.834 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.835 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:26.835 11:26:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:26.835 Cannot find device "nvmf_init_br" 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:26.835 Cannot find device "nvmf_init_br2" 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:26.835 Cannot find device "nvmf_tgt_br" 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:26.835 Cannot find device "nvmf_tgt_br2" 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:26.835 Cannot find device "nvmf_init_br" 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:06:26.835 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:26.835 Cannot find device "nvmf_init_br2" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:26.836 Cannot find device "nvmf_tgt_br" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:26.836 Cannot find device "nvmf_tgt_br2" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:26.836 Cannot find device "nvmf_br" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:26.836 Cannot find device "nvmf_init_if" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:26.836 Cannot find device "nvmf_init_if2" 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:26.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:26.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:26.836 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:27.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:27.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:06:27.095 00:06:27.095 --- 10.0.0.3 ping statistics --- 00:06:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.095 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:27.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:27.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:06:27.095 00:06:27.095 --- 10.0.0.4 ping statistics --- 00:06:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.095 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:27.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:06:27.095 00:06:27.095 --- 10.0.0.1 ping statistics --- 00:06:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.095 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:27.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:06:27.095 00:06:27.095 --- 10.0.0.2 ping statistics --- 00:06:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.095 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.095 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=64062 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 64062 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 64062 ']' 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.354 11:26:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.354 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.354 [2024-12-16 11:26:32.571934] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:27.354 [2024-12-16 11:26:32.572061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.354 [2024-12-16 11:26:32.723387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.612 [2024-12-16 11:26:32.782143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.612 [2024-12-16 11:26:32.782207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.612 [2024-12-16 11:26:32.782218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.612 [2024-12-16 11:26:32.782227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.612 [2024-12-16 11:26:32.782234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
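
Both targets in this file are started the same way: -m 0xE pins the reactors to cores 1-3 (the three "Reactor started" lines that follow), -e 0xFFFF is the tracepoint group mask behind the notices above, and -i 0 is the shm id that names /dev/shm/nvmf_trace.0. One way to decode any such core mask (a throwaway sketch, not part of the harness):

    mask=0xE                    # value passed to -m; bit i set => run a reactor on core i
    for i in $(seq 0 7); do
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done
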
00:06:27.612 [2024-12-16 11:26:32.783470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.612 [2024-12-16 11:26:32.783584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.612 [2024-12-16 11:26:32.783588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:27.613 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:27.871 [2024-12-16 11:26:33.240951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.871 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.438 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:28.438 [2024-12-16 11:26:33.843507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:28.697 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:28.955 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:29.214 Malloc0 00:06:29.214 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.473 Delay0 00:06:29.473 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.732 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:29.994 NULL1 00:06:29.994 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:30.253 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=64185
00:06:30.253 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:30.253 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:06:30.253 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:31.630 Read completed with error (sct=0, sc=11)
00:06:31.630 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:31.630 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:31.630 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:31.888 true
00:06:31.888 11:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:06:31.888 11:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:32.824 11:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.083 11:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:33.083 11:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:33.083 true
00:06:33.341 11:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:06:33.341 11:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:33.600 11:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.858 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:33.858 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:34.117 true
00:06:34.117 11:26:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:34.117 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.376 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.634 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:34.634 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:34.891 true 00:06:34.891 11:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:34.891 11:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.150 11:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.716 11:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:35.716 11:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:35.716 true 00:06:35.716 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:35.716 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.650 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.908 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:36.908 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:37.167 true 00:06:37.167 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:37.167 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.425 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.684 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.684 11:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:37.942 true 00:06:37.942 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 64185 00:06:37.942 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.201 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.459 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:38.459 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:38.717 true 00:06:38.717 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:38.717 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.652 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.909 11:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:39.909 11:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:40.167 true 00:06:40.167 11:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:40.167 11:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.425 11:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.991 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:40.991 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:40.991 true 00:06:40.991 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:40.991 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.557 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.557 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:41.557 11:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:41.816 true 00:06:42.075 11:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:42.075 11:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.643 11:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.902 11:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:42.902 11:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:43.161 true 00:06:43.161 11:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:43.161 11:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.729 11:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.988 11:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:43.988 11:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:44.246 true 00:06:44.246 11:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:44.246 11:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.505 11:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.763 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:44.763 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:45.331 true 00:06:45.331 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:45.331 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.331 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.590 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:45.590 11:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:45.849 true 00:06:46.108 11:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:46.108 11:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.366 11:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.625 11:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:46.625 11:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:46.884 true 00:06:46.884 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:46.884 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.820 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.079 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:48.079 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:48.337 true 00:06:48.337 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:48.337 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.596 11:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.854 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:48.854 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:49.112 true 00:06:49.112 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:49.112 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.371 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.629 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:49.629 11:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:49.891 true 00:06:49.891 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:49.891 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.828 11:26:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.086 11:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:51.086 11:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:51.344 true 00:06:51.344 11:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:51.344 11:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.603 11:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.862 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:51.862 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:52.120 true 00:06:52.120 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:52.120 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.378 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.637 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:52.637 11:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:52.895 true 00:06:52.895 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:52.895 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.829 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.122 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:54.122 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:54.392 true 00:06:54.392 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:54.392 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.651 11:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.909 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:54.909 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:55.167 true 00:06:55.167 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:55.167 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.425 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.683 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:55.683 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:55.941 true 00:06:55.941 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:55.941 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.198 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.456 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:56.456 11:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:56.714 true 00:06:56.973 11:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:56.973 11:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.907 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.166 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:58.166 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:58.424 true 00:06:58.424 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185 00:06:58.425 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.683 11:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:06:58.941 11:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:58.941 11:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:59.199 true
00:06:59.199 11:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:06:59.199 11:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:59.457 11:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:59.716 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:59.716 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:00.294 true
00:07:00.294 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:07:00.294 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:00.294 Initializing NVMe Controllers
00:07:00.294 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:07:00.294 Controller IO queue size 128, less than required.
00:07:00.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:00.294 Controller IO queue size 128, less than required.
00:07:00.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:00.294 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:00.294 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:00.294 Initialization complete. Launching workers.
00:07:00.294 ========================================================
00:07:00.294                                                                                Latency(us)
00:07:00.294 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:00.294 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     306.59       0.15  137497.99    3567.23 1064200.35
00:07:00.294 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    6705.89       3.27   19087.03    2833.47  639472.02
00:07:00.294 ========================================================
00:07:00.294 Total                                                                   :    7012.48       3.42   24264.00    2833.47 1064200.35
00:07:00.294 
00:07:00.294 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:00.876 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:00.876 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:00.876 true
00:07:00.876 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64185
00:07:00.876 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64185) - No such process
00:07:00.876 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64185
00:07:01.135 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:01.135 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:01.702 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:01.702 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:01.702 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:01.702 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:01.702 11:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:01.703 null0
00:07:01.703 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:01.703 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:01.703 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:01.961 null1
00:07:01.961 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:01.961 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:01.961 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:02.220 null2
00:07:02.220 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.220 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.220 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:02.788 null3 00:07:02.788 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.788 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.788 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:02.788 null4 00:07:02.788 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.788 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.788 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:03.046 null5 00:07:03.046 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.046 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.046 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:03.304 null6 00:07:03.304 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.305 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.305 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:03.563 null7 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
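Two notes on the spdk_nvme_perf summary above. First, the pairing of rows: NSID 1 is the delay-injected Delay0 bdev (created earlier with 1000000 us latency parameters), while NSID 2 is the NULL1 bdev, which is why their average latencies differ by roughly a factor of seven; the repeated "Controller IO queue size 128, less than required" warning is the perf tool noting that the controller's IO queue is smaller than the requested queue depth (-q 128), so surplus requests wait inside the driver. Second, the Total row is internally consistent: 306.59 + 6705.89 = 7012.48 IOPS, the min/max columns take the extremes of the two rows, and the overall average latency is the IOPS-weighted mean of the per-namespace averages, (306.59 * 137497.99 + 6705.89 * 19087.03) / 7012.48 ≈ 24264 us.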
00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.563 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
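From this point on, the xtrace output of the eight backgrounded workers interleaves, which is why the @59/@62/@64 loop-control entries appear shuffled between the @14-@17 entries of individual workers. Pieced together from those trace lines, the hotplug stress loop is roughly the following (a reconstruction from the trace, not a verbatim copy of ns_hotplug_stress.sh; rpc.py path abbreviated):

    # add_remove: attach and detach one namespace ten times in a row
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            ./scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # null bdevs as cheap namespace backings: 100 MB, 4096-byte blocks
        ./scripts/rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # worker i hotplugs NSID i+1
        pids+=($!)
    done
    wait "${pids[@]}"

Running the eight workers concurrently is the point of the test: the namespace attach and detach RPCs race against one another (and, earlier in the run, against live I/O), exercising the target's hot-plug paths under contention.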
00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.564 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.822 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:03.822 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:03.822 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.823 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.823 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.823 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65255 65256 65258 65259 65260 65263 65266 65267 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.081 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.340 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.599 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.857 
11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.857 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:05.116 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.374 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.632 11:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.632 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.889 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.889 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.890 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.890 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.890 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.890 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.890 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 
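While the churn runs, the target's view can be sampled out of band. nvmf_get_subsystems is a real SPDK RPC; the jq path below assumes its usual output layout (an array of subsystems, each carrying an nqn and a namespaces list) and is only a convenience for eyeballing the hotplug state, not part of the traced test:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems |
    jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1").namespaces | map(.nsid)'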
11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.148 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.406 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.665 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.665 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.924 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.182 
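The null0..null7 bdevs cycling through the namespaces must have been created before the loop started, outside this excerpt. A plausible setup step; bdev_null_create is a real RPC, but the size and block-size values here are placeholders (check rpc.py bdev_null_create -h on the tree in use):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for b in {0..7}; do
    # name, total size, block size -- the numeric values are illustrative only
    "$rpc" bdev_null_create "null$b" 100 512
done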
11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.182 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.441 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:07.700 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.700 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.700 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.700 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.700 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.700 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.700 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.700 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.959 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.218 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 
11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.476 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.735 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.735 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.735 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.735 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.735 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.735 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.994 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.253 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.770 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5
00:07:09.770 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.770 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.770 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:10.028 rmmod nvme_tcp
00:07:10.028 rmmod nvme_fabrics
00:07:10.028 rmmod nvme_keyring
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 64062 ']'
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 64062
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 64062 ']'
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 64062
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64062
00:07:10.028 killing process with pid 64062
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:10.028 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:10.029 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64062'
00:07:10.029 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 64062
00:07:10.029 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 64062
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:07:10.287 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:10.547 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:10.547 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
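Condensed out of the nvmf_veth_fini trace just above, the NET_TYPE=virt teardown boils down to unhooking both pairs of bridge ports, deleting the bridge and the host-side interfaces, and clearing the target's network namespace. Interface and namespace names are the ones the trace shows; the final netns delete is implied by remove_spdk_ns rather than spelled out here:

for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster   # detach from the bridge
    ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed: the trace only shows remove_spdk_ns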
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:07:10.548 ************************************
00:07:10.548 END TEST nvmf_ns_hotplug_stress
00:07:10.548 ************************************
00:07:10.548
00:07:10.548 real 0m43.918s
00:07:10.548 user 3m36.139s
00:07:10.548 sys 0m12.778s
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:10.548 ************************************
00:07:10.548 START TEST nvmf_delete_subsystem
00:07:10.548 ************************************
00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:10.548 * Looking for test storage...
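The banner blocks and the real/user/sys triple come from the run_test wrapper in autotest_common.sh, which times each test script and, per the '[' 3 -le 1 ']' check in the trace, first makes sure it was handed a command to run. A simplified sketch of the observable behavior, not the verbatim helper:

run_test() {
    local test_name=$1; shift
    (( $# >= 1 )) || return 1        # approximates the arg-count guard above
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # timing wraps the body and the END banner, which is why real/user/sys
    # print after "END TEST" in the log
    time {
        "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
}

run_test nvmf_delete_subsystem \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp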
00:07:10.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.548 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.816 --rc genhtml_branch_coverage=1 00:07:10.816 --rc genhtml_function_coverage=1 00:07:10.816 --rc genhtml_legend=1 00:07:10.816 --rc geninfo_all_blocks=1 00:07:10.816 --rc geninfo_unexecuted_blocks=1 00:07:10.816 00:07:10.816 ' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.816 --rc genhtml_branch_coverage=1 00:07:10.816 --rc genhtml_function_coverage=1 00:07:10.816 --rc genhtml_legend=1 00:07:10.816 --rc geninfo_all_blocks=1 00:07:10.816 --rc geninfo_unexecuted_blocks=1 00:07:10.816 00:07:10.816 ' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.816 --rc genhtml_branch_coverage=1 00:07:10.816 --rc genhtml_function_coverage=1 00:07:10.816 --rc genhtml_legend=1 00:07:10.816 --rc geninfo_all_blocks=1 00:07:10.816 --rc geninfo_unexecuted_blocks=1 00:07:10.816 00:07:10.816 ' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.816 --rc genhtml_branch_coverage=1 00:07:10.816 --rc genhtml_function_coverage=1 00:07:10.816 --rc genhtml_legend=1 00:07:10.816 --rc geninfo_all_blocks=1 00:07:10.816 --rc geninfo_unexecuted_blocks=1 00:07:10.816 00:07:10.816 ' 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.816 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.816 
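Backing up a step: the lcov probe traced above ('lt 1.15 2' via cmp_versions in scripts/common.sh) decides whether the installed lcov predates version 2 by splitting both version strings on '.', '-' or ':' and comparing numerically, field by field, with missing fields treated as zero. A condensed reconstruction of that traced logic; the real helper also validates each field through decimal() and supports more operators:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # versions equal: true only for ==, <=, >=
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace's return 0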
11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.816 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.816 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:10.817 Cannot find device "nvmf_init_br" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:10.817 Cannot find device "nvmf_init_br2" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:10.817 Cannot find device "nvmf_tgt_br" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.817 Cannot find device "nvmf_tgt_br2" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:10.817 Cannot find device "nvmf_init_br" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:10.817 Cannot find device "nvmf_init_br2" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:10.817 Cannot find device "nvmf_tgt_br" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:10.817 Cannot find device "nvmf_tgt_br2" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:10.817 Cannot find device "nvmf_br" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:10.817 Cannot find device "nvmf_init_if" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:10.817 Cannot find device "nvmf_init_if2" 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
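The NVMF_* assignments traced above pin down the test topology, and the tolerated "Cannot find device" / "Cannot open network namespace" failures are just nvmf_veth_init clearing leftovers from a previous run before rebuilding. For orientation, the topology it is about to build can be sketched with plain iproute2 commands; this is a simplified reconstruction from the traced variables and commands, not the exact nvmf/common.sh code, and it covers only the first initiator/target pair:

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge                            # NVMF_BRIDGE ties the two sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic

The real init below also creates the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) and then pings all four addresses across the bridge to verify the wiring before the target starts.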
00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:10.817 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:11.114 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:11.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:11.114 00:07:11.114 --- 10.0.0.3 ping statistics --- 00:07:11.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.114 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:11.115 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:11.115 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:07:11.115 00:07:11.115 --- 10.0.0.4 ping statistics --- 00:07:11.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.115 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:11.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:11.115 00:07:11.115 --- 10.0.0.1 ping statistics --- 00:07:11.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.115 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:11.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:11.115 00:07:11.115 --- 10.0.0.2 ping statistics --- 00:07:11.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.115 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=66672 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 66672 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 66672 ']' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.115 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.115 [2024-12-16 11:27:16.464518] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:11.115 [2024-12-16 11:27:16.464630] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.373 [2024-12-16 11:27:16.617568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.373 [2024-12-16 11:27:16.668478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.373 [2024-12-16 11:27:16.668549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.373 [2024-12-16 11:27:16.668569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.373 [2024-12-16 11:27:16.668580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.373 [2024-12-16 11:27:16.668588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.373 [2024-12-16 11:27:16.669835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.373 [2024-12-16 11:27:16.669848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 [2024-12-16 11:27:16.859135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 [2024-12-16 11:27:16.875733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 NULL1 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 Delay0 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.632 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66710 00:07:11.633 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:11.633 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:11.890 [2024-12-16 11:27:17.079813] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
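At this point the target is fully provisioned: the rpc_cmd calls traced above (delete_subsystem.sh@15-@24) can be replayed by hand with SPDK's scripts/rpc.py client against the default /var/tmp/spdk.sock socket. A sketch of the equivalent invocations, with comments on the parameters actually used:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512                       # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s average and p99 latency per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev is the crux of the test: with roughly a second of injected latency and spdk_nvme_perf running at queue depth 128, the nvmf_delete_subsystem call that follows is guaranteed to land while I/O is still in flight.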
00:07:13.792 11:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.792 11:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.792 11:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error 
(sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 [2024-12-16 11:27:19.112322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ec30 is same with the state(6) to be set 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error 
(sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 starting I/O failed: -6 00:07:13.792 Read completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.792 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 starting I/O failed: -6 00:07:13.793 [2024-12-16 11:27:19.114657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90d8000c80 is same with the state(6) to be set 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, 
sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:13.793 Write completed with error (sct=0, sc=8) 00:07:13.793 Read completed with error (sct=0, sc=8) 00:07:14.726 [2024-12-16 11:27:20.092588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1923aa0 is same with the state(6) to be set 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 [2024-12-16 11:27:20.112354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ea50 is same with the state(6) to be set 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Write completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 
00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.726 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 [2024-12-16 11:27:20.112602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1931ea0 is same with the state(6) to be set 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 [2024-12-16 11:27:20.114608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90d800d060 is same with the state(6) to be set 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read 
completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 Write completed with error (sct=0, sc=8) 00:07:14.727 Read completed with error (sct=0, sc=8) 00:07:14.727 [2024-12-16 11:27:20.114929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f90d800d840 is same with the state(6) to be set 00:07:14.727 Initializing NVMe Controllers 00:07:14.727 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:14.727 Controller IO queue size 128, less than required. 00:07:14.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:14.727 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:14.727 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:14.727 Initialization complete. Launching workers. 00:07:14.727 ======================================================== 00:07:14.727 Latency(us) 00:07:14.727 Device Information : IOPS MiB/s Average min max 00:07:14.727 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.70 0.09 884072.01 546.23 1007927.03 00:07:14.727 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.26 0.08 942864.56 1124.51 2002439.99 00:07:14.727 ======================================================== 00:07:14.727 Total : 334.96 0.16 912201.58 546.23 2002439.99 00:07:14.727 00:07:14.727 [2024-12-16 11:27:20.116323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1923aa0 (9): Bad file descriptor 00:07:14.727 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:14.727 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.727 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:14.727 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66710 00:07:14.727 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66710 00:07:15.294 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66710) - No such process 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66710 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 66710 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:15.294 11:27:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 66710 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.294 [2024-12-16 11:27:20.645426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66755 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:15.294 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.552 [2024-12-16 11:27:20.833176] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
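The delay=0 / kill -0 / sleep 0.5 records here are delete_subsystem.sh polling for spdk_nvme_perf to exit on its own after its subsystem vanishes; the "No such process" message above is that probe failing once the first perf run has died, which is what ends the loop. Reconstructed from the traced line numbers (@56-@60), the loop is roughly the sketch below; the timeout branch is an assumption, since it is never taken in this run:

    perf_pid=66755                      # pid captured at delete_subsystem.sh@54 above
    delay=0
    while kill -0 "$perf_pid"; do       # kill -0 only probes for existence; it sends no signal
        sleep 0.5
        if (( delay++ > 20 )); then     # ~10 s budget; the first wait above used > 30
            exit 1                      # assumption: the real failure handling is not shown in this trace
        fi
    done

With Delay0 adding about a second per I/O and this perf run told to stop after three seconds (-t 3), the process needs several of these half-second polls before it drains and exits, as the repeated kill -0 records below show.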
00:07:15.810 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.810 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:15.810 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.377 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.377 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:16.377 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.944 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.944 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:16.944 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.510 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.510 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:17.510 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.078 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.078 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:18.078 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.336 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.336 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:18.336 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.595 Initializing NVMe Controllers 00:07:18.595 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.595 Controller IO queue size 128, less than required. 00:07:18.595 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.595 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:18.595 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:18.595 Initialization complete. Launching workers. 
00:07:18.595 ======================================================== 00:07:18.595 Latency(us) 00:07:18.595 Device Information : IOPS MiB/s Average min max 00:07:18.595 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003350.52 1000129.24 1011419.56 00:07:18.595 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005548.30 1000228.04 1042664.84 00:07:18.595 ======================================================== 00:07:18.595 Total : 256.00 0.12 1004449.41 1000129.24 1042664.84 00:07:18.595 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66755 00:07:18.870 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66755) - No such process 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66755 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.870 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.870 rmmod nvme_tcp 00:07:18.870 rmmod nvme_fabrics 00:07:18.870 rmmod nvme_keyring 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 66672 ']' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 66672 ']' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.142 killing process with pid 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66672' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 66672 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:19.142 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:07:19.401 00:07:19.401 real 0m8.970s 00:07:19.401 user 0m27.496s 00:07:19.401 sys 0m1.629s 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.401 ************************************ 00:07:19.401 END TEST nvmf_delete_subsystem 00:07:19.401 ************************************ 00:07:19.401 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.661 ************************************ 00:07:19.661 START TEST nvmf_host_management 00:07:19.661 ************************************ 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:19.661 * Looking for test storage... 00:07:19.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.661 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:19.661 
11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.661 --rc genhtml_branch_coverage=1 00:07:19.661 --rc genhtml_function_coverage=1 00:07:19.661 --rc genhtml_legend=1 00:07:19.661 --rc geninfo_all_blocks=1 00:07:19.661 --rc geninfo_unexecuted_blocks=1 00:07:19.661 00:07:19.661 ' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.661 --rc genhtml_branch_coverage=1 00:07:19.661 --rc genhtml_function_coverage=1 00:07:19.661 --rc genhtml_legend=1 00:07:19.661 --rc geninfo_all_blocks=1 00:07:19.661 --rc geninfo_unexecuted_blocks=1 00:07:19.661 00:07:19.661 ' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.661 --rc genhtml_branch_coverage=1 00:07:19.661 --rc genhtml_function_coverage=1 00:07:19.661 --rc genhtml_legend=1 00:07:19.661 --rc geninfo_all_blocks=1 00:07:19.661 --rc geninfo_unexecuted_blocks=1 00:07:19.661 00:07:19.661 ' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.661 --rc genhtml_branch_coverage=1 00:07:19.661 --rc 
genhtml_function_coverage=1 00:07:19.661 --rc genhtml_legend=1 00:07:19.661 --rc geninfo_all_blocks=1 00:07:19.661 --rc geninfo_unexecuted_blocks=1 00:07:19.661 00:07:19.661 ' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.661 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:19.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:19.662 11:27:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:19.662 Cannot find device "nvmf_init_br" 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:19.662 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:19.919 Cannot find device "nvmf_init_br2" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:19.919 Cannot find device "nvmf_tgt_br" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.919 Cannot find device "nvmf_tgt_br2" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:19.919 Cannot find device "nvmf_init_br" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:19.919 Cannot find device "nvmf_init_br2" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:19.919 Cannot find device "nvmf_tgt_br" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:19.919 Cannot find device "nvmf_tgt_br2" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:19.919 Cannot find device "nvmf_br" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:19.919 11:27:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:19.919 Cannot find device "nvmf_init_if" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:19.919 Cannot find device "nvmf_init_if2" 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:19.919 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:19.920 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:20.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:20.178 00:07:20.178 --- 10.0.0.3 ping statistics --- 00:07:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.178 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:20.178 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:20.178 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:07:20.178 00:07:20.178 --- 10.0.0.4 ping statistics --- 00:07:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.178 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:20.178 00:07:20.178 --- 10.0.0.1 ping statistics --- 00:07:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.178 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:20.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:07:20.178 00:07:20.178 --- 10.0.0.2 ping statistics --- 00:07:20.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.178 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.178 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=67047 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 67047 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 67047 ']' 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.179 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.179 [2024-12-16 11:27:25.528741] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:20.179 [2024-12-16 11:27:25.529346] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.437 [2024-12-16 11:27:25.682240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.437 [2024-12-16 11:27:25.746400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.437 [2024-12-16 11:27:25.746461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.437 [2024-12-16 11:27:25.746486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.437 [2024-12-16 11:27:25.746508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.437 [2024-12-16 11:27:25.746528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
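For orientation: what nvmfappstart has just done is launch the target inside the test namespace and block until its RPC socket answers. A rough sketch of the sequence (the socket poll is a simplification of waitforlisten, not its exact implementation, and note that $! here is the pid of ip netns exec rather than of nvmf_tgt itself):

# Start the NVMe-oF target in the namespace set up earlier, as in the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll until the app is up and listening on its UNIX-domain RPC socket.
for ((retry = 0; retry < 100; retry++)); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done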
00:07:20.437 [2024-12-16 11:27:25.747928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.437 [2024-12-16 11:27:25.748069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.437 [2024-12-16 11:27:25.748178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.437 [2024-12-16 11:27:25.748184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.696 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 [2024-12-16 11:27:25.931837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.697 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 Malloc0 00:07:20.697 [2024-12-16 11:27:26.008866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=67106 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67106 /var/tmp/bdevperf.sock 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 67106 ']' 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:20.697 { 00:07:20.697 "params": { 00:07:20.697 "name": "Nvme$subsystem", 00:07:20.697 "trtype": "$TEST_TRANSPORT", 00:07:20.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.697 "adrfam": "ipv4", 00:07:20.697 "trsvcid": "$NVMF_PORT", 00:07:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.697 "hdgst": ${hdgst:-false}, 00:07:20.697 "ddgst": ${ddgst:-false} 00:07:20.697 }, 00:07:20.697 "method": "bdev_nvme_attach_controller" 00:07:20.697 } 00:07:20.697 EOF 00:07:20.697 )") 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:20.697 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:20.697 "params": { 00:07:20.697 "name": "Nvme0", 00:07:20.697 "trtype": "tcp", 00:07:20.697 "traddr": "10.0.0.3", 00:07:20.697 "adrfam": "ipv4", 00:07:20.697 "trsvcid": "4420", 00:07:20.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:20.697 "hdgst": false, 00:07:20.697 "ddgst": false 00:07:20.697 }, 00:07:20.697 "method": "bdev_nvme_attach_controller" 00:07:20.697 }' 00:07:20.956 [2024-12-16 11:27:26.117205] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
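A note on the --json /dev/fd/63 argument visible above: the harness never writes a config file. gen_nvmf_target_json prints the JSON shown in the trace, and bash process substitution hands it to bdevperf as an inherited file descriptor. Roughly (paths and flags mirror the traced invocation; treat this as a sketch, not the script's verbatim line):

# <(...) expands to /dev/fd/N, which bdevperf reads as its JSON config;
# -q 64 -o 65536 -w verify -t 10 match the run traced above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10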
00:07:20.956 [2024-12-16 11:27:26.117301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67106 ] 00:07:20.956 [2024-12-16 11:27:26.270805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.956 [2024-12-16 11:27:26.329961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.215 Running I/O for 10 seconds... 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.473 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:21.473 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:21.473 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
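The @54/@55 entries around this point are waitforio: up to ten probes of bdevperf's iostat, a quarter second apart, until the Nvme0n1 bdev has completed at least 100 reads (the first probe above saw 67, so the loop slept and retried). A minimal sketch of the pattern, calling rpc.py directly where the harness goes through its rpc_cmd wrapper:

ret=1
for ((i = 10; i != 0; i--)); do
    # Ask the bdevperf app, via its own RPC socket, how many reads completed.
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0   # enough I/O has flowed; the attached controller is live
        break
    fi
    sleep 0.25
done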
00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.734 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.734 [2024-12-16 11:27:26.979371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.734 [2024-12-16 11:27:26.979442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.734 [2024-12-16 11:27:26.979486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.734 [2024-12-16 11:27:26.979497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.734 [2024-12-16 11:27:26.979511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.734 [2024-12-16 11:27:26.979520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.734 [2024-12-16 11:27:26.979533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.734 [2024-12-16 11:27:26.979543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.734 [2024-12-16 11:27:26.979564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.734 [2024-12-16 11:27:26.979574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:07:21.734 [2024-12-16 11:27:26.979586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.734 [2024-12-16 11:27:26.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the same READ / ABORTED - SQ DELETION (00/08) notice pair repeats for cid:5 through cid:61 (lba 82560 through 89728 in 128-block steps); the entries are identical except for cid and lba]
00:07:21.735 [2024-12-16 11:27:26.980824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.735 [2024-12-16 11:27:26.980834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:21.735 [2024-12-16 11:27:26.980845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae52a0 is same with the state(6) to be set
00:07:21.735 [2024-12-16 11:27:26.982159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:21.735 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0
]]
00:07:21.735 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:21.735 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.735 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:21.735 task offset: 89984 on job bdev=Nvme0n1 fails
00:07:21.736
00:07:21.736 Latency(us)
00:07:21.736 [2024-12-16T11:27:27.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:21.736 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:21.736 Job: Nvme0n1 ended in about 0.46 seconds with error
00:07:21.736 Verification LBA range: start 0x0 length 0x400
00:07:21.736 Nvme0n1 : 0.46 1402.09 87.63 140.21 0.00 39846.06 6315.29 44564.48
00:07:21.736 [2024-12-16T11:27:27.150Z] ===================================================================================================================
00:07:21.736 [2024-12-16T11:27:27.150Z] Total : 1402.09 87.63 140.21 0.00 39846.06 6315.29 44564.48
00:07:21.736 [2024-12-16 11:27:26.984695] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:21.736 [2024-12-16 11:27:26.984743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aed130 (9): Bad file descriptor
00:07:21.736 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.736 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:21.736 [2024-12-16 11:27:26.991710] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
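[Editor's note] The failed-run table above is internally consistent: at an IO size of 65536 bytes (64 KiB), throughput in MiB/s is IOPS/16. A quick check with bc, not part of the test output:

    # hypothetical sanity check: IOPS * IO size (bytes) / 1048576 bytes per MiB
    $ echo '1402.09 * 65536 / 1048576' | bc -l
    87.630625

which matches the reported 87.63 MiB/s (and, by the same rule, 1530.81 IOPS gives the 95.68 MiB/s of the passing run further down).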
00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67106 00:07:22.671 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67106) - No such process 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:22.671 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:22.671 { 00:07:22.671 "params": { 00:07:22.671 "name": "Nvme$subsystem", 00:07:22.671 "trtype": "$TEST_TRANSPORT", 00:07:22.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.671 "adrfam": "ipv4", 00:07:22.671 "trsvcid": "$NVMF_PORT", 00:07:22.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.672 "hdgst": ${hdgst:-false}, 00:07:22.672 "ddgst": ${ddgst:-false} 00:07:22.672 }, 00:07:22.672 "method": "bdev_nvme_attach_controller" 00:07:22.672 } 00:07:22.672 EOF 00:07:22.672 )") 00:07:22.672 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:22.672 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:22.672 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:22.672 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:22.672 "params": { 00:07:22.672 "name": "Nvme0", 00:07:22.672 "trtype": "tcp", 00:07:22.672 "traddr": "10.0.0.3", 00:07:22.672 "adrfam": "ipv4", 00:07:22.672 "trsvcid": "4420", 00:07:22.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:22.672 "hdgst": false, 00:07:22.672 "ddgst": false 00:07:22.672 }, 00:07:22.672 "method": "bdev_nvme_attach_controller" 00:07:22.672 }' 00:07:22.672 [2024-12-16 11:27:28.056664] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:22.672 [2024-12-16 11:27:28.057273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67152 ] 00:07:22.930 [2024-12-16 11:27:28.224224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.930 [2024-12-16 11:27:28.304570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.189 Running I/O for 1 seconds... 
00:07:24.124 1472.00 IOPS, 92.00 MiB/s
00:07:24.124 Latency(us)
00:07:24.124 [2024-12-16T11:27:29.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:24.124 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:24.124 Verification LBA range: start 0x0 length 0x400
00:07:24.124 Nvme0n1 : 1.00 1530.81 95.68 0.00 0.00 40981.98 7536.64 36938.47
00:07:24.124 [2024-12-16T11:27:29.538Z] ===================================================================================================================
00:07:24.124 [2024-12-16T11:27:29.538Z] Total : 1530.81 95.68 0.00 0.00 40981.98 7536.64 36938.47
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:24.382 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:24.382 rmmod nvme_tcp
00:07:24.641 rmmod nvme_fabrics
00:07:24.641 rmmod nvme_keyring
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 67047 ']'
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 67047
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 67047 ']'
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 67047
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67047
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:07:24.641 killing process with pid 67047 00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67047' 00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 67047 00:07:24.641 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 67047 00:07:24.899 [2024-12-16 11:27:30.057288] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:24.899 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.900 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.900 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:25.162 00:07:25.162 real 0m5.503s 00:07:25.162 user 0m19.782s 00:07:25.162 sys 0m1.493s 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.162 ************************************ 00:07:25.162 END TEST nvmf_host_management 00:07:25.162 ************************************ 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.162 ************************************ 00:07:25.162 START TEST nvmf_lvol 00:07:25.162 ************************************ 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:25.162 * Looking for test storage... 
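[Editor's note] run_test is the harness wrapper that prints the START/END banners and the real/user/sys timing seen above. A minimal sketch, assuming a local SPDK checkout with the test dependencies installed, of invoking this same stage by hand (hypothetical; the CI always drives it through run_test):

    # from the SPDK repository root, as root; --transport=tcp is the same flag the harness passes
    $ sudo ./test/nvmf/target/nvmf_lvol.sh --transport=tcp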
00:07:25.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.162 --rc genhtml_branch_coverage=1 00:07:25.162 --rc genhtml_function_coverage=1 00:07:25.162 --rc genhtml_legend=1 00:07:25.162 --rc geninfo_all_blocks=1 00:07:25.162 --rc geninfo_unexecuted_blocks=1 00:07:25.162 00:07:25.162 ' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.162 --rc genhtml_branch_coverage=1 00:07:25.162 --rc genhtml_function_coverage=1 00:07:25.162 --rc genhtml_legend=1 00:07:25.162 --rc geninfo_all_blocks=1 00:07:25.162 --rc geninfo_unexecuted_blocks=1 00:07:25.162 00:07:25.162 ' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.162 --rc genhtml_branch_coverage=1 00:07:25.162 --rc genhtml_function_coverage=1 00:07:25.162 --rc genhtml_legend=1 00:07:25.162 --rc geninfo_all_blocks=1 00:07:25.162 --rc geninfo_unexecuted_blocks=1 00:07:25.162 00:07:25.162 ' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.162 --rc genhtml_branch_coverage=1 00:07:25.162 --rc genhtml_function_coverage=1 00:07:25.162 --rc genhtml_legend=1 00:07:25.162 --rc geninfo_all_blocks=1 00:07:25.162 --rc geninfo_unexecuted_blocks=1 00:07:25.162 00:07:25.162 ' 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.162 11:27:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.162 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.426 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:25.427 
11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
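[Editor's note] A sketch of the virtual topology that nvmf_veth_init assembles from the variables above; all interface names and addresses are taken from this trace, and the ip/iptables commands that follow create it (two veth pairs per side, bridge-side ends enslaved to nvmf_br, target ends moved into the nvmf_tgt_ns_spdk namespace):

    #  root netns (initiator side)                          netns nvmf_tgt_ns_spdk (target side)
    #  nvmf_init_if  10.0.0.1/24 ── nvmf_init_br  ─┐     ┌─ nvmf_tgt_br  ── nvmf_tgt_if   10.0.0.3/24
    #                                            nvmf_br
    #  nvmf_init_if2 10.0.0.2/24 ── nvmf_init_br2 ─┘     └─ nvmf_tgt_br2 ── nvmf_tgt_if2  10.0.0.4/24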
00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:25.427 Cannot find device "nvmf_init_br" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:25.427 Cannot find device "nvmf_init_br2" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:25.427 Cannot find device "nvmf_tgt_br" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.427 Cannot find device "nvmf_tgt_br2" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:25.427 Cannot find device "nvmf_init_br" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:25.427 Cannot find device "nvmf_init_br2" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:25.427 Cannot find device "nvmf_tgt_br" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:25.427 Cannot find device "nvmf_tgt_br2" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:25.427 Cannot find device "nvmf_br" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:25.427 Cannot find device "nvmf_init_if" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:25.427 Cannot find device "nvmf_init_if2" 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.427 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.428 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:07:25.687 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:25.687 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms
00:07:25.687
00:07:25.687 --- 10.0.0.3 ping statistics ---
00:07:25.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:25.687 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:07:25.687 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:07:25.687 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:07:25.687
00:07:25.687 --- 10.0.0.4 ping statistics ---
00:07:25.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:25.687 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:25.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:25.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:07:25.687
00:07:25.687 --- 10.0.0.1 ping statistics ---
00:07:25.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:25.687 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:07:25.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:25.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:07:25.687 00:07:25.687 --- 10.0.0.2 ping statistics --- 00:07:25.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.687 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.687 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=67412 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 67412 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 67412 ']' 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.687 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.687 [2024-12-16 11:27:31.088238] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
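[Editor's note] The -m 0x7 passed to nvmf_tgt above is a reactor core bitmask: 0x7 is binary 111, i.e. cores 0, 1 and 2, which is why three reactors are reported started in the EAL output that follows. A quick conversion with bc, not part of the test output:

    $ echo 'obase=2; ibase=16; 7' | bc
    111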
00:07:25.687 [2024-12-16 11:27:31.088348] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.946 [2024-12-16 11:27:31.243691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.946 [2024-12-16 11:27:31.316768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.946 [2024-12-16 11:27:31.316885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.946 [2024-12-16 11:27:31.316939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.946 [2024-12-16 11:27:31.316953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.946 [2024-12-16 11:27:31.316965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.946 [2024-12-16 11:27:31.318407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.946 [2024-12-16 11:27:31.318575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.946 [2024-12-16 11:27:31.318588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.206 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:26.465 [2024-12-16 11:27:31.776532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.465 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:26.724 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:26.724 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:26.982 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:26.982 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:27.549 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:27.808 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8ff7a8be-c353-4610-b2ee-3989c959bb6b 00:07:27.808 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
8ff7a8be-c353-4610-b2ee-3989c959bb6b lvol 20 00:07:28.067 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3fd3e725-497f-4e04-8395-cf9145c0e9f8 00:07:28.067 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.325 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3fd3e725-497f-4e04-8395-cf9145c0e9f8 00:07:28.583 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:28.842 [2024-12-16 11:27:34.172918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:28.842 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:29.100 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:29.100 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67552 00:07:29.100 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:30.486 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3fd3e725-497f-4e04-8395-cf9145c0e9f8 MY_SNAPSHOT 00:07:30.486 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=31ec5655-b2f4-45c9-8b5c-7fad0cafc514 00:07:30.486 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3fd3e725-497f-4e04-8395-cf9145c0e9f8 30 00:07:31.051 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 31ec5655-b2f4-45c9-8b5c-7fad0cafc514 MY_CLONE 00:07:31.309 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=443749c9-5a12-4ab4-a2d6-3b6d768851bf 00:07:31.309 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 443749c9-5a12-4ab4-a2d6-3b6d768851bf 00:07:32.245 11:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67552 00:07:40.473 Initializing NVMe Controllers 00:07:40.473 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:40.473 Controller IO queue size 128, less than required. 00:07:40.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:40.473 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:40.473 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:40.473 Initialization complete. Launching workers. 
00:07:40.473 ======================================================== 00:07:40.473 Latency(us) 00:07:40.473 Device Information : IOPS MiB/s Average min max 00:07:40.473 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10464.70 40.88 12241.03 2457.16 58956.06 00:07:40.473 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10442.80 40.79 12261.14 2398.55 49564.44 00:07:40.473 ======================================================== 00:07:40.473 Total : 20907.50 81.67 12251.07 2398.55 58956.06 00:07:40.473 00:07:40.473 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3fd3e725-497f-4e04-8395-cf9145c0e9f8 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ff7a8be-c353-4610-b2ee-3989c959bb6b 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.473 rmmod nvme_tcp 00:07:40.473 rmmod nvme_fabrics 00:07:40.473 rmmod nvme_keyring 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 67412 ']' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 67412 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 67412 ']' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 67412 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67412 00:07:40.473 killing process with pid 67412 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67412' 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 67412 00:07:40.473 11:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 67412 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:40.731 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:40.990 00:07:40.990 real 0m15.925s 00:07:40.990 user 1m5.797s 00:07:40.990 sys 0m4.020s 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.990 ************************************ 00:07:40.990 END TEST nvmf_lvol 00:07:40.990 ************************************ 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.990 ************************************ 00:07:40.990 START TEST nvmf_lvs_grow 00:07:40.990 ************************************ 00:07:40.990 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:41.249 * Looking for test storage... 00:07:41.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:41.249 11:27:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.249 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:41.250 Cannot find device "nvmf_init_br" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:41.250 Cannot find device "nvmf_init_br2" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:41.250 Cannot find device "nvmf_tgt_br" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:41.250 Cannot find device "nvmf_tgt_br2" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:41.250 Cannot find device "nvmf_init_br" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:41.250 Cannot find device "nvmf_init_br2" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:41.250 Cannot find device "nvmf_tgt_br" 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:41.250 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:41.509 Cannot find device "nvmf_tgt_br2" 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:41.509 Cannot find device "nvmf_br" 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:41.509 Cannot find device "nvmf_init_if" 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:41.509 Cannot find device "nvmf_init_if2" 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:41.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:41.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:41.509 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
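[editor's note] For reference, the nvmf_veth_init sequence traced above reduces to the standalone sketch below. Interface names, the 10.0.0.x/24 addresses, and the iptables arguments are copied from the expanded commands in this log (the ipts helper shown in the trace is just an iptables wrapper that appends the SPDK_NVMF comment tag); this is a condensed approximation of what nvmf/common.sh builds, not the script source itself.

  # Recreate the two-initiator / two-target veth topology from the trace.
  ip netns add nvmf_tgt_ns_spdk

  # Each interface gets a peer that will be enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target ends move into the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing as seen in the pings: initiators on .1/.2, targets on .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring every link up, including loopback inside the namespace.
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the four peer ends together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done

  # Firewall rules from the next step of the trace: admit NVMe/TCP (4420)
  # and forward across the bridge; the SPDK_NVMF comment lets the iptr
  # teardown helper strip exactly these rules later via iptables-save | grep -v.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  # Sanity checks mirroring the log: initiator->target and target->initiator.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1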
00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:41.510 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:41.768 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:41.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:41.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:07:41.768 00:07:41.768 --- 10.0.0.3 ping statistics --- 00:07:41.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.768 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:07:41.768 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:41.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:41.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:07:41.768 00:07:41.768 --- 10.0.0.4 ping statistics --- 00:07:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.769 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:41.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:41.769 00:07:41.769 --- 10.0.0.1 ping statistics --- 00:07:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.769 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:41.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:41.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:07:41.769 00:07:41.769 --- 10.0.0.2 ping statistics --- 00:07:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.769 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=67971 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 67971 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 67971 ']' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.769 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.769 [2024-12-16 11:27:47.026305] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:41.769 [2024-12-16 11:27:47.026414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.769 [2024-12-16 11:27:47.176516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.027 [2024-12-16 11:27:47.232714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.027 [2024-12-16 11:27:47.232806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.027 [2024-12-16 11:27:47.232847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.027 [2024-12-16 11:27:47.232863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.027 [2024-12-16 11:27:47.232884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.027 [2024-12-16 11:27:47.233483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.027 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.285 [2024-12-16 11:27:47.692251] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.544 ************************************ 00:07:42.544 START TEST lvs_grow_clean 00:07:42.544 ************************************ 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.544 11:27:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.544 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.801 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.801 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:43.059 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:43.059 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:43.059 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:43.316 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:43.317 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:43.317 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6b34015b-d21b-4da9-af9f-2e581013d30b lvol 150 00:07:43.887 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 00:07:43.887 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:43.887 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:44.145 [2024-12-16 11:27:49.304970] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:44.145 [2024-12-16 11:27:49.305079] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:44.145 true 00:07:44.145 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:44.145 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:44.403 11:27:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:44.403 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.661 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 00:07:44.919 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:45.177 [2024-12-16 11:27:50.449621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:45.177 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68129 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68129 /var/tmp/bdevperf.sock 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 68129 ']' 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.435 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:45.435 [2024-12-16 11:27:50.802641] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:45.435 [2024-12-16 11:27:50.803202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68129 ] 00:07:45.693 [2024-12-16 11:27:50.949326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.693 [2024-12-16 11:27:51.010890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.952 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.952 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:45.952 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:46.210 Nvme0n1 00:07:46.210 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:46.468 [ 00:07:46.468 { 00:07:46.468 "aliases": [ 00:07:46.468 "cb54fbc1-de88-4fd6-87b3-80f56ff3ba87" 00:07:46.468 ], 00:07:46.468 "assigned_rate_limits": { 00:07:46.468 "r_mbytes_per_sec": 0, 00:07:46.468 "rw_ios_per_sec": 0, 00:07:46.468 "rw_mbytes_per_sec": 0, 00:07:46.468 "w_mbytes_per_sec": 0 00:07:46.468 }, 00:07:46.468 "block_size": 4096, 00:07:46.468 "claimed": false, 00:07:46.468 "driver_specific": { 00:07:46.468 "mp_policy": "active_passive", 00:07:46.468 "nvme": [ 00:07:46.468 { 00:07:46.468 "ctrlr_data": { 00:07:46.468 "ana_reporting": false, 00:07:46.468 "cntlid": 1, 00:07:46.468 "firmware_revision": "25.01", 00:07:46.468 "model_number": "SPDK bdev Controller", 00:07:46.468 "multi_ctrlr": true, 00:07:46.468 "oacs": { 00:07:46.468 "firmware": 0, 00:07:46.468 "format": 0, 00:07:46.468 "ns_manage": 0, 00:07:46.468 "security": 0 00:07:46.468 }, 00:07:46.468 "serial_number": "SPDK0", 00:07:46.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.468 "vendor_id": "0x8086" 00:07:46.468 }, 00:07:46.468 "ns_data": { 00:07:46.468 "can_share": true, 00:07:46.468 "id": 1 00:07:46.468 }, 00:07:46.468 "trid": { 00:07:46.468 "adrfam": "IPv4", 00:07:46.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.468 "traddr": "10.0.0.3", 00:07:46.468 "trsvcid": "4420", 00:07:46.468 "trtype": "TCP" 00:07:46.468 }, 00:07:46.469 "vs": { 00:07:46.469 "nvme_version": "1.3" 00:07:46.469 } 00:07:46.469 } 00:07:46.469 ] 00:07:46.469 }, 00:07:46.469 "memory_domains": [ 00:07:46.469 { 00:07:46.469 "dma_device_id": "system", 00:07:46.469 "dma_device_type": 1 00:07:46.469 } 00:07:46.469 ], 00:07:46.469 "name": "Nvme0n1", 00:07:46.469 "num_blocks": 38912, 00:07:46.469 "numa_id": -1, 00:07:46.469 "product_name": "NVMe disk", 00:07:46.469 "supported_io_types": { 00:07:46.469 "abort": true, 00:07:46.469 "compare": true, 00:07:46.469 "compare_and_write": true, 00:07:46.469 "copy": true, 00:07:46.469 "flush": true, 00:07:46.469 "get_zone_info": false, 00:07:46.469 "nvme_admin": true, 00:07:46.469 "nvme_io": true, 00:07:46.469 "nvme_io_md": false, 00:07:46.469 "nvme_iov_md": false, 00:07:46.469 "read": true, 00:07:46.469 "reset": true, 00:07:46.469 "seek_data": false, 00:07:46.469 "seek_hole": false, 00:07:46.469 "unmap": true, 00:07:46.469 
"write": true, 00:07:46.469 "write_zeroes": true, 00:07:46.469 "zcopy": false, 00:07:46.469 "zone_append": false, 00:07:46.469 "zone_management": false 00:07:46.469 }, 00:07:46.469 "uuid": "cb54fbc1-de88-4fd6-87b3-80f56ff3ba87", 00:07:46.469 "zoned": false 00:07:46.469 } 00:07:46.469 ] 00:07:46.469 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:46.469 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68159 00:07:46.469 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:46.469 Running I/O for 10 seconds... 00:07:47.847 Latency(us) 00:07:47.847 [2024-12-16T11:27:53.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.847 Nvme0n1 : 1.00 8183.00 31.96 0.00 0.00 0.00 0.00 0.00 00:07:47.847 [2024-12-16T11:27:53.261Z] =================================================================================================================== 00:07:47.847 [2024-12-16T11:27:53.261Z] Total : 8183.00 31.96 0.00 0.00 0.00 0.00 0.00 00:07:47.847 00:07:48.413 11:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:48.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.671 Nvme0n1 : 2.00 8105.50 31.66 0.00 0.00 0.00 0.00 0.00 00:07:48.671 [2024-12-16T11:27:54.085Z] =================================================================================================================== 00:07:48.671 [2024-12-16T11:27:54.085Z] Total : 8105.50 31.66 0.00 0.00 0.00 0.00 0.00 00:07:48.671 00:07:48.929 true 00:07:48.929 11:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:48.929 11:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:49.187 11:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:49.187 11:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:49.187 11:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68159 00:07:49.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.445 Nvme0n1 : 3.00 8089.67 31.60 0.00 0.00 0.00 0.00 0.00 00:07:49.445 [2024-12-16T11:27:54.859Z] =================================================================================================================== 00:07:49.445 [2024-12-16T11:27:54.859Z] Total : 8089.67 31.60 0.00 0.00 0.00 0.00 0.00 00:07:49.445 00:07:50.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.821 Nvme0n1 : 4.00 8061.25 31.49 0.00 0.00 0.00 0.00 0.00 00:07:50.821 [2024-12-16T11:27:56.235Z] =================================================================================================================== 00:07:50.821 [2024-12-16T11:27:56.235Z] Total : 8061.25 31.49 0.00 0.00 0.00 
0.00 0.00 00:07:50.821 00:07:51.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.766 Nvme0n1 : 5.00 7890.60 30.82 0.00 0.00 0.00 0.00 0.00 00:07:51.766 [2024-12-16T11:27:57.180Z] =================================================================================================================== 00:07:51.766 [2024-12-16T11:27:57.180Z] Total : 7890.60 30.82 0.00 0.00 0.00 0.00 0.00 00:07:51.766 00:07:52.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.729 Nvme0n1 : 6.00 7878.50 30.78 0.00 0.00 0.00 0.00 0.00 00:07:52.729 [2024-12-16T11:27:58.143Z] =================================================================================================================== 00:07:52.729 [2024-12-16T11:27:58.143Z] Total : 7878.50 30.78 0.00 0.00 0.00 0.00 0.00 00:07:52.729 00:07:53.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.665 Nvme0n1 : 7.00 7803.57 30.48 0.00 0.00 0.00 0.00 0.00 00:07:53.665 [2024-12-16T11:27:59.079Z] =================================================================================================================== 00:07:53.665 [2024-12-16T11:27:59.079Z] Total : 7803.57 30.48 0.00 0.00 0.00 0.00 0.00 00:07:53.665 00:07:54.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.601 Nvme0n1 : 8.00 7786.12 30.41 0.00 0.00 0.00 0.00 0.00 00:07:54.601 [2024-12-16T11:28:00.015Z] =================================================================================================================== 00:07:54.601 [2024-12-16T11:28:00.015Z] Total : 7786.12 30.41 0.00 0.00 0.00 0.00 0.00 00:07:54.601 00:07:55.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.533 Nvme0n1 : 9.00 7770.22 30.35 0.00 0.00 0.00 0.00 0.00 00:07:55.533 [2024-12-16T11:28:00.947Z] =================================================================================================================== 00:07:55.533 [2024-12-16T11:28:00.947Z] Total : 7770.22 30.35 0.00 0.00 0.00 0.00 0.00 00:07:55.533 00:07:56.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.467 Nvme0n1 : 10.00 7772.30 30.36 0.00 0.00 0.00 0.00 0.00 00:07:56.467 [2024-12-16T11:28:01.881Z] =================================================================================================================== 00:07:56.467 [2024-12-16T11:28:01.881Z] Total : 7772.30 30.36 0.00 0.00 0.00 0.00 0.00 00:07:56.467 00:07:56.467 00:07:56.467 Latency(us) 00:07:56.467 [2024-12-16T11:28:01.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.467 Nvme0n1 : 10.02 7770.62 30.35 0.00 0.00 16460.14 5659.93 101044.60 00:07:56.467 [2024-12-16T11:28:01.881Z] =================================================================================================================== 00:07:56.467 [2024-12-16T11:28:01.881Z] Total : 7770.62 30.35 0.00 0.00 16460.14 5659.93 101044.60 00:07:56.467 { 00:07:56.467 "results": [ 00:07:56.467 { 00:07:56.467 "job": "Nvme0n1", 00:07:56.467 "core_mask": "0x2", 00:07:56.467 "workload": "randwrite", 00:07:56.467 "status": "finished", 00:07:56.467 "queue_depth": 128, 00:07:56.467 "io_size": 4096, 00:07:56.467 "runtime": 10.018637, 00:07:56.467 "iops": 7770.617899420849, 00:07:56.467 "mibps": 30.353976169612693, 00:07:56.467 "io_failed": 0, 00:07:56.467 "io_timeout": 0, 00:07:56.467 "avg_latency_us": 
16460.14391781036, 00:07:56.467 "min_latency_us": 5659.927272727273, 00:07:56.467 "max_latency_us": 101044.59636363636 00:07:56.467 } 00:07:56.467 ], 00:07:56.467 "core_count": 1 00:07:56.467 } 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68129 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 68129 ']' 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 68129 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68129 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.725 killing process with pid 68129 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68129' 00:07:56.725 Received shutdown signal, test time was about 10.000000 seconds 00:07:56.725 00:07:56.725 Latency(us) 00:07:56.725 [2024-12-16T11:28:02.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.725 [2024-12-16T11:28:02.139Z] =================================================================================================================== 00:07:56.725 [2024-12-16T11:28:02.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 68129 00:07:56.725 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 68129 00:07:56.725 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:57.292 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:57.551 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:57.551 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:57.809 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:57.809 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:57.809 11:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:57.809 [2024-12-16 11:28:03.202636] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:58.067 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:58.331 2024/12/16 11:28:03 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6b34015b-d21b-4da9-af9f-2e581013d30b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:58.331 request: 00:07:58.331 { 00:07:58.331 "method": "bdev_lvol_get_lvstores", 00:07:58.332 "params": { 00:07:58.332 "uuid": "6b34015b-d21b-4da9-af9f-2e581013d30b" 00:07:58.332 } 00:07:58.332 } 00:07:58.332 Got JSON-RPC error response 00:07:58.332 GoRPCClient: error on JSON-RPC call 00:07:58.332 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:58.332 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.332 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.332 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.332 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.604 aio_bdev 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 00:07:58.604 11:28:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.604 11:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.861 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 -t 2000 00:07:59.120 [ 00:07:59.120 { 00:07:59.120 "aliases": [ 00:07:59.120 "lvs/lvol" 00:07:59.120 ], 00:07:59.120 "assigned_rate_limits": { 00:07:59.120 "r_mbytes_per_sec": 0, 00:07:59.120 "rw_ios_per_sec": 0, 00:07:59.120 "rw_mbytes_per_sec": 0, 00:07:59.120 "w_mbytes_per_sec": 0 00:07:59.120 }, 00:07:59.120 "block_size": 4096, 00:07:59.120 "claimed": false, 00:07:59.120 "driver_specific": { 00:07:59.120 "lvol": { 00:07:59.120 "base_bdev": "aio_bdev", 00:07:59.120 "clone": false, 00:07:59.120 "esnap_clone": false, 00:07:59.120 "lvol_store_uuid": "6b34015b-d21b-4da9-af9f-2e581013d30b", 00:07:59.120 "num_allocated_clusters": 38, 00:07:59.120 "snapshot": false, 00:07:59.120 "thin_provision": false 00:07:59.120 } 00:07:59.120 }, 00:07:59.120 "name": "cb54fbc1-de88-4fd6-87b3-80f56ff3ba87", 00:07:59.120 "num_blocks": 38912, 00:07:59.120 "product_name": "Logical Volume", 00:07:59.120 "supported_io_types": { 00:07:59.120 "abort": false, 00:07:59.120 "compare": false, 00:07:59.120 "compare_and_write": false, 00:07:59.120 "copy": false, 00:07:59.120 "flush": false, 00:07:59.120 "get_zone_info": false, 00:07:59.120 "nvme_admin": false, 00:07:59.120 "nvme_io": false, 00:07:59.120 "nvme_io_md": false, 00:07:59.120 "nvme_iov_md": false, 00:07:59.120 "read": true, 00:07:59.120 "reset": true, 00:07:59.120 "seek_data": true, 00:07:59.120 "seek_hole": true, 00:07:59.120 "unmap": true, 00:07:59.120 "write": true, 00:07:59.120 "write_zeroes": true, 00:07:59.120 "zcopy": false, 00:07:59.120 "zone_append": false, 00:07:59.120 "zone_management": false 00:07:59.120 }, 00:07:59.120 "uuid": "cb54fbc1-de88-4fd6-87b3-80f56ff3ba87", 00:07:59.120 "zoned": false 00:07:59.120 } 00:07:59.120 ] 00:07:59.120 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:59.120 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:59.120 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:59.379 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:59.379 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6b34015b-d21b-4da9-af9f-2e581013d30b 00:07:59.379 11:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:59.946 11:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:59.946 11:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cb54fbc1-de88-4fd6-87b3-80f56ff3ba87 00:08:00.204 11:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b34015b-d21b-4da9-af9f-2e581013d30b 00:08:00.462 11:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:00.720 11:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:00.979 ************************************ 00:08:00.979 END TEST lvs_grow_clean 00:08:00.979 ************************************ 00:08:00.979 00:08:00.979 real 0m18.618s 00:08:00.979 user 0m17.904s 00:08:00.979 sys 0m2.244s 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.979 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.236 ************************************ 00:08:01.236 START TEST lvs_grow_dirty 00:08:01.237 ************************************ 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:01.237 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:01.237 
11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.494 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:01.494 11:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:01.752 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:01.752 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:01.752 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:02.010 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:02.010 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:02.010 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be346be3-6eac-40fb-95af-dbe3dbc40c1f lvol 150 00:08:02.268 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:02.268 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:02.268 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:02.835 [2024-12-16 11:28:07.949000] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:02.835 [2024-12-16 11:28:07.949121] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:02.835 true 00:08:02.835 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:02.835 11:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:03.093 11:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:03.093 11:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.352 11:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:03.611 11:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:03.869 [2024-12-16 11:28:09.073619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:03.869 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:04.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68568 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68568 /var/tmp/bdevperf.sock 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68568 ']' 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.128 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.128 [2024-12-16 11:28:09.408580] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
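Once the bdevperf socket is listening, the script attaches an NVMe-oF controller inside the bdevperf process and then drives the run through bdevperf.py, exactly as the trace below shows. A sketch of those two steps (same flags as the logged commands, paths relative to the repo root):

    # Create Nvme0n1 inside bdevperf by connecting back to the target over TCP
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Start the configured 10 s randwrite workload and wait for the result JSON
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

bdev_get_bdevs against the same socket is what produces the large JSON bdev dump that follows; the test only needs it to confirm Nvme0n1 came up with the expected block count.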
00:08:04.128 [2024-12-16 11:28:09.408698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68568 ] 00:08:04.387 [2024-12-16 11:28:09.549115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.387 [2024-12-16 11:28:09.597568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.387 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.387 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:04.387 11:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:04.953 Nvme0n1 00:08:04.953 11:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.211 [ 00:08:05.211 { 00:08:05.211 "aliases": [ 00:08:05.211 "ffd21544-2583-4155-aa16-fcd2d22e7069" 00:08:05.211 ], 00:08:05.211 "assigned_rate_limits": { 00:08:05.211 "r_mbytes_per_sec": 0, 00:08:05.211 "rw_ios_per_sec": 0, 00:08:05.211 "rw_mbytes_per_sec": 0, 00:08:05.211 "w_mbytes_per_sec": 0 00:08:05.211 }, 00:08:05.211 "block_size": 4096, 00:08:05.211 "claimed": false, 00:08:05.211 "driver_specific": { 00:08:05.211 "mp_policy": "active_passive", 00:08:05.211 "nvme": [ 00:08:05.211 { 00:08:05.211 "ctrlr_data": { 00:08:05.211 "ana_reporting": false, 00:08:05.211 "cntlid": 1, 00:08:05.211 "firmware_revision": "25.01", 00:08:05.211 "model_number": "SPDK bdev Controller", 00:08:05.211 "multi_ctrlr": true, 00:08:05.211 "oacs": { 00:08:05.211 "firmware": 0, 00:08:05.211 "format": 0, 00:08:05.211 "ns_manage": 0, 00:08:05.211 "security": 0 00:08:05.211 }, 00:08:05.211 "serial_number": "SPDK0", 00:08:05.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.211 "vendor_id": "0x8086" 00:08:05.211 }, 00:08:05.211 "ns_data": { 00:08:05.211 "can_share": true, 00:08:05.211 "id": 1 00:08:05.211 }, 00:08:05.211 "trid": { 00:08:05.211 "adrfam": "IPv4", 00:08:05.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.211 "traddr": "10.0.0.3", 00:08:05.211 "trsvcid": "4420", 00:08:05.211 "trtype": "TCP" 00:08:05.211 }, 00:08:05.211 "vs": { 00:08:05.211 "nvme_version": "1.3" 00:08:05.211 } 00:08:05.211 } 00:08:05.211 ] 00:08:05.211 }, 00:08:05.211 "memory_domains": [ 00:08:05.211 { 00:08:05.211 "dma_device_id": "system", 00:08:05.211 "dma_device_type": 1 00:08:05.211 } 00:08:05.211 ], 00:08:05.211 "name": "Nvme0n1", 00:08:05.211 "num_blocks": 38912, 00:08:05.211 "numa_id": -1, 00:08:05.211 "product_name": "NVMe disk", 00:08:05.211 "supported_io_types": { 00:08:05.211 "abort": true, 00:08:05.211 "compare": true, 00:08:05.211 "compare_and_write": true, 00:08:05.211 "copy": true, 00:08:05.211 "flush": true, 00:08:05.211 "get_zone_info": false, 00:08:05.211 "nvme_admin": true, 00:08:05.211 "nvme_io": true, 00:08:05.211 "nvme_io_md": false, 00:08:05.211 "nvme_iov_md": false, 00:08:05.211 "read": true, 00:08:05.211 "reset": true, 00:08:05.211 "seek_data": false, 00:08:05.211 "seek_hole": false, 00:08:05.211 "unmap": true, 00:08:05.211 
"write": true, 00:08:05.211 "write_zeroes": true, 00:08:05.211 "zcopy": false, 00:08:05.211 "zone_append": false, 00:08:05.211 "zone_management": false 00:08:05.211 }, 00:08:05.211 "uuid": "ffd21544-2583-4155-aa16-fcd2d22e7069", 00:08:05.211 "zoned": false 00:08:05.211 } 00:08:05.211 ] 00:08:05.211 11:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68602 00:08:05.211 11:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.211 11:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.211 Running I/O for 10 seconds... 00:08:06.146 Latency(us) 00:08:06.146 [2024-12-16T11:28:11.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.146 Nvme0n1 : 1.00 7974.00 31.15 0.00 0.00 0.00 0.00 0.00 00:08:06.146 [2024-12-16T11:28:11.560Z] =================================================================================================================== 00:08:06.146 [2024-12-16T11:28:11.560Z] Total : 7974.00 31.15 0.00 0.00 0.00 0.00 0.00 00:08:06.146 00:08:07.081 11:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:07.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.339 Nvme0n1 : 2.00 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:08:07.339 [2024-12-16T11:28:12.753Z] =================================================================================================================== 00:08:07.339 [2024-12-16T11:28:12.753Z] Total : 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:08:07.339 00:08:07.597 true 00:08:07.597 11:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:07.597 11:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:07.856 11:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:07.856 11:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:07.856 11:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68602 00:08:08.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.114 Nvme0n1 : 3.00 7937.67 31.01 0.00 0.00 0.00 0.00 0.00 00:08:08.114 [2024-12-16T11:28:13.528Z] =================================================================================================================== 00:08:08.114 [2024-12-16T11:28:13.528Z] Total : 7937.67 31.01 0.00 0.00 0.00 0.00 0.00 00:08:08.114 00:08:09.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.489 Nvme0n1 : 4.00 7874.25 30.76 0.00 0.00 0.00 0.00 0.00 00:08:09.489 [2024-12-16T11:28:14.903Z] =================================================================================================================== 00:08:09.489 [2024-12-16T11:28:14.903Z] Total : 7874.25 30.76 0.00 0.00 0.00 
0.00 0.00 00:08:09.489 00:08:10.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.422 Nvme0n1 : 5.00 7823.40 30.56 0.00 0.00 0.00 0.00 0.00 00:08:10.422 [2024-12-16T11:28:15.836Z] =================================================================================================================== 00:08:10.422 [2024-12-16T11:28:15.836Z] Total : 7823.40 30.56 0.00 0.00 0.00 0.00 0.00 00:08:10.422 00:08:11.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.358 Nvme0n1 : 6.00 7829.67 30.58 0.00 0.00 0.00 0.00 0.00 00:08:11.358 [2024-12-16T11:28:16.772Z] =================================================================================================================== 00:08:11.358 [2024-12-16T11:28:16.772Z] Total : 7829.67 30.58 0.00 0.00 0.00 0.00 0.00 00:08:11.358 00:08:12.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.292 Nvme0n1 : 7.00 7535.14 29.43 0.00 0.00 0.00 0.00 0.00 00:08:12.292 [2024-12-16T11:28:17.706Z] =================================================================================================================== 00:08:12.292 [2024-12-16T11:28:17.707Z] Total : 7535.14 29.43 0.00 0.00 0.00 0.00 0.00 00:08:12.293 00:08:13.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.224 Nvme0n1 : 8.00 7515.00 29.36 0.00 0.00 0.00 0.00 0.00 00:08:13.224 [2024-12-16T11:28:18.638Z] =================================================================================================================== 00:08:13.224 [2024-12-16T11:28:18.638Z] Total : 7515.00 29.36 0.00 0.00 0.00 0.00 0.00 00:08:13.224 00:08:14.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.159 Nvme0n1 : 9.00 7462.56 29.15 0.00 0.00 0.00 0.00 0.00 00:08:14.159 [2024-12-16T11:28:19.573Z] =================================================================================================================== 00:08:14.159 [2024-12-16T11:28:19.573Z] Total : 7462.56 29.15 0.00 0.00 0.00 0.00 0.00 00:08:14.159 00:08:15.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.533 Nvme0n1 : 10.00 7451.70 29.11 0.00 0.00 0.00 0.00 0.00 00:08:15.533 [2024-12-16T11:28:20.947Z] =================================================================================================================== 00:08:15.533 [2024-12-16T11:28:20.947Z] Total : 7451.70 29.11 0.00 0.00 0.00 0.00 0.00 00:08:15.533 00:08:15.533 00:08:15.533 Latency(us) 00:08:15.533 [2024-12-16T11:28:20.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.533 Nvme0n1 : 10.01 7454.41 29.12 0.00 0.00 17159.78 7506.85 253564.74 00:08:15.533 [2024-12-16T11:28:20.947Z] =================================================================================================================== 00:08:15.533 [2024-12-16T11:28:20.947Z] Total : 7454.41 29.12 0.00 0.00 17159.78 7506.85 253564.74 00:08:15.533 { 00:08:15.533 "results": [ 00:08:15.533 { 00:08:15.533 "job": "Nvme0n1", 00:08:15.533 "core_mask": "0x2", 00:08:15.533 "workload": "randwrite", 00:08:15.533 "status": "finished", 00:08:15.533 "queue_depth": 128, 00:08:15.533 "io_size": 4096, 00:08:15.533 "runtime": 10.013539, 00:08:15.533 "iops": 7454.40747771592, 00:08:15.533 "mibps": 29.118779209827814, 00:08:15.533 "io_failed": 0, 00:08:15.533 "io_timeout": 0, 00:08:15.533 "avg_latency_us": 
17159.78043953501, 00:08:15.533 "min_latency_us": 7506.850909090909, 00:08:15.533 "max_latency_us": 253564.74181818182 00:08:15.533 } 00:08:15.533 ], 00:08:15.533 "core_count": 1 00:08:15.533 } 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68568 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 68568 ']' 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 68568 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68568 00:08:15.533 killing process with pid 68568 00:08:15.533 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.533 00:08:15.533 Latency(us) 00:08:15.533 [2024-12-16T11:28:20.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.533 [2024-12-16T11:28:20.947Z] =================================================================================================================== 00:08:15.533 [2024-12-16T11:28:20.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68568' 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 68568 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 68568 00:08:15.533 11:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:15.791 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.049 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.049 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67971 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67971 00:08:16.307 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67971 Killed "${NVMF_APP[@]}" "$@" 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=68770 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 68770 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68770 ']' 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.307 11:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.565 [2024-12-16 11:28:21.738379] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:16.565 [2024-12-16 11:28:21.738652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.565 [2024-12-16 11:28:21.883541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.565 [2024-12-16 11:28:21.937599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.565 [2024-12-16 11:28:21.937658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.565 [2024-12-16 11:28:21.937686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.565 [2024-12-16 11:28:21.937709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.565 [2024-12-16 11:28:21.937717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
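This restart is the point of the dirty variant: the previous nvmf_tgt (pid 67971) was killed with SIGKILL while the grown lvstore was still open, so when the AIO bdev is re-created under the fresh target below, blobstore load must run recovery rather than a clean load. Roughly, per the trace (nvmfpid is the harness's variable for the old target):

    kill -9 "$nvmfpid"                 # leave the lvstore dirty on purpose
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target process
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096

The "Performing recovery on blobstore" and per-blob "Recover: blob 0x…" notices just below are the expected evidence that the dirty shutdown was detected and repaired on load.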
00:08:16.565 [2024-12-16 11:28:21.938181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.823 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.081 [2024-12-16 11:28:22.362967] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:17.081 [2024-12-16 11:28:22.364184] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:17.081 [2024-12-16 11:28:22.364577] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.081 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.340 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffd21544-2583-4155-aa16-fcd2d22e7069 -t 2000 00:08:17.598 [ 00:08:17.598 { 00:08:17.598 "aliases": [ 00:08:17.598 "lvs/lvol" 00:08:17.598 ], 00:08:17.598 "assigned_rate_limits": { 00:08:17.598 "r_mbytes_per_sec": 0, 00:08:17.598 "rw_ios_per_sec": 0, 00:08:17.598 "rw_mbytes_per_sec": 0, 00:08:17.598 "w_mbytes_per_sec": 0 00:08:17.598 }, 00:08:17.598 "block_size": 4096, 00:08:17.598 "claimed": false, 00:08:17.598 "driver_specific": { 00:08:17.598 "lvol": { 00:08:17.598 "base_bdev": "aio_bdev", 00:08:17.598 "clone": false, 00:08:17.598 "esnap_clone": false, 00:08:17.598 "lvol_store_uuid": "be346be3-6eac-40fb-95af-dbe3dbc40c1f", 00:08:17.598 "num_allocated_clusters": 38, 00:08:17.598 "snapshot": false, 00:08:17.598 
"thin_provision": false 00:08:17.598 } 00:08:17.598 }, 00:08:17.598 "name": "ffd21544-2583-4155-aa16-fcd2d22e7069", 00:08:17.598 "num_blocks": 38912, 00:08:17.598 "product_name": "Logical Volume", 00:08:17.598 "supported_io_types": { 00:08:17.598 "abort": false, 00:08:17.598 "compare": false, 00:08:17.598 "compare_and_write": false, 00:08:17.598 "copy": false, 00:08:17.598 "flush": false, 00:08:17.598 "get_zone_info": false, 00:08:17.598 "nvme_admin": false, 00:08:17.598 "nvme_io": false, 00:08:17.598 "nvme_io_md": false, 00:08:17.598 "nvme_iov_md": false, 00:08:17.598 "read": true, 00:08:17.598 "reset": true, 00:08:17.598 "seek_data": true, 00:08:17.598 "seek_hole": true, 00:08:17.598 "unmap": true, 00:08:17.598 "write": true, 00:08:17.598 "write_zeroes": true, 00:08:17.598 "zcopy": false, 00:08:17.598 "zone_append": false, 00:08:17.598 "zone_management": false 00:08:17.598 }, 00:08:17.598 "uuid": "ffd21544-2583-4155-aa16-fcd2d22e7069", 00:08:17.598 "zoned": false 00:08:17.598 } 00:08:17.598 ] 00:08:17.598 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:17.598 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:17.598 11:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:17.857 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:17.857 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:17.857 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:18.424 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:18.424 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.424 [2024-12-16 11:28:23.812543] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.682 11:28:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.682 11:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:18.940 2024/12/16 11:28:24 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:be346be3-6eac-40fb-95af-dbe3dbc40c1f], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:18.940 request: 00:08:18.940 { 00:08:18.940 "method": "bdev_lvol_get_lvstores", 00:08:18.940 "params": { 00:08:18.940 "uuid": "be346be3-6eac-40fb-95af-dbe3dbc40c1f" 00:08:18.940 } 00:08:18.940 } 00:08:18.940 Got JSON-RPC error response 00:08:18.940 GoRPCClient: error on JSON-RPC call 00:08:18.940 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:18.940 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.940 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.941 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.941 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.199 aio_bdev 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.199 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:19.457 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffd21544-2583-4155-aa16-fcd2d22e7069 -t 2000 00:08:19.715 [ 
00:08:19.715 { 00:08:19.715 "aliases": [ 00:08:19.715 "lvs/lvol" 00:08:19.715 ], 00:08:19.715 "assigned_rate_limits": { 00:08:19.715 "r_mbytes_per_sec": 0, 00:08:19.715 "rw_ios_per_sec": 0, 00:08:19.715 "rw_mbytes_per_sec": 0, 00:08:19.715 "w_mbytes_per_sec": 0 00:08:19.715 }, 00:08:19.715 "block_size": 4096, 00:08:19.715 "claimed": false, 00:08:19.715 "driver_specific": { 00:08:19.715 "lvol": { 00:08:19.715 "base_bdev": "aio_bdev", 00:08:19.715 "clone": false, 00:08:19.715 "esnap_clone": false, 00:08:19.715 "lvol_store_uuid": "be346be3-6eac-40fb-95af-dbe3dbc40c1f", 00:08:19.715 "num_allocated_clusters": 38, 00:08:19.715 "snapshot": false, 00:08:19.715 "thin_provision": false 00:08:19.715 } 00:08:19.715 }, 00:08:19.715 "name": "ffd21544-2583-4155-aa16-fcd2d22e7069", 00:08:19.715 "num_blocks": 38912, 00:08:19.715 "product_name": "Logical Volume", 00:08:19.715 "supported_io_types": { 00:08:19.715 "abort": false, 00:08:19.715 "compare": false, 00:08:19.715 "compare_and_write": false, 00:08:19.715 "copy": false, 00:08:19.715 "flush": false, 00:08:19.715 "get_zone_info": false, 00:08:19.715 "nvme_admin": false, 00:08:19.715 "nvme_io": false, 00:08:19.715 "nvme_io_md": false, 00:08:19.715 "nvme_iov_md": false, 00:08:19.715 "read": true, 00:08:19.715 "reset": true, 00:08:19.715 "seek_data": true, 00:08:19.715 "seek_hole": true, 00:08:19.715 "unmap": true, 00:08:19.715 "write": true, 00:08:19.715 "write_zeroes": true, 00:08:19.715 "zcopy": false, 00:08:19.715 "zone_append": false, 00:08:19.715 "zone_management": false 00:08:19.715 }, 00:08:19.715 "uuid": "ffd21544-2583-4155-aa16-fcd2d22e7069", 00:08:19.715 "zoned": false 00:08:19.715 } 00:08:19.715 ] 00:08:19.715 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:19.715 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:19.715 11:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.973 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.973 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.973 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:20.231 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:20.231 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ffd21544-2583-4155-aa16-fcd2d22e7069 00:08:20.488 11:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be346be3-6eac-40fb-95af-dbe3dbc40c1f 00:08:21.054 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.311 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:21.569 00:08:21.569 real 0m20.468s 00:08:21.569 user 0m42.366s 00:08:21.569 sys 0m8.253s 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.569 ************************************ 00:08:21.569 END TEST lvs_grow_dirty 00:08:21.569 ************************************ 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:21.569 nvmf_trace.0 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.569 11:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.827 rmmod nvme_tcp 00:08:21.827 rmmod nvme_fabrics 00:08:21.827 rmmod nvme_keyring 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 68770 ']' 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 68770 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 68770 ']' 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 68770 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:21.827 11:28:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.827 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68770 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.085 killing process with pid 68770 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68770' 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 68770 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 68770 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:22.085 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.343 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:22.343 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:22.343 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:22.343 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:22.343 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:22.344 00:08:22.344 real 0m41.350s 00:08:22.344 user 1m6.582s 00:08:22.344 sys 0m11.341s 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.344 ************************************ 00:08:22.344 END TEST nvmf_lvs_grow 00:08:22.344 ************************************ 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.344 11:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.603 ************************************ 00:08:22.603 START TEST nvmf_bdev_io_wait 00:08:22.603 ************************************ 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:22.603 * Looking for test storage... 
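Everything the lvs_grow_dirty teardown above does goes through SPDK's JSON-RPC interface, driven from the shell via scripts/rpc.py against the target's UNIX socket; querying the lvstore while its AIO backing file is missing is what produced the expected Code=-19 (No such device) error earlier in the trace. A minimal sketch of the same query-and-teardown sequence, assuming a running target and this job's repo layout (the UUIDs are examples copied from the trace above, not values to reuse):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=be346be3-6eac-40fb-95af-dbe3dbc40c1f      # lvstore UUID from the trace above
    lvol=ffd21544-2583-4155-aa16-fcd2d22e7069     # lvol UUID from the trace above

    # the free/total cluster checks in the trace read the lvstore JSON via jq
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

    # tear down in dependency order: lvol, then its store, then the AIO bdev and its file
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev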
00:08:22.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.603 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.604 --rc genhtml_branch_coverage=1 00:08:22.604 --rc genhtml_function_coverage=1 00:08:22.604 --rc genhtml_legend=1 00:08:22.604 --rc geninfo_all_blocks=1 00:08:22.604 --rc geninfo_unexecuted_blocks=1 00:08:22.604 00:08:22.604 ' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.604 --rc genhtml_branch_coverage=1 00:08:22.604 --rc genhtml_function_coverage=1 00:08:22.604 --rc genhtml_legend=1 00:08:22.604 --rc geninfo_all_blocks=1 00:08:22.604 --rc geninfo_unexecuted_blocks=1 00:08:22.604 00:08:22.604 ' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.604 --rc genhtml_branch_coverage=1 00:08:22.604 --rc genhtml_function_coverage=1 00:08:22.604 --rc genhtml_legend=1 00:08:22.604 --rc geninfo_all_blocks=1 00:08:22.604 --rc geninfo_unexecuted_blocks=1 00:08:22.604 00:08:22.604 ' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.604 --rc genhtml_branch_coverage=1 00:08:22.604 --rc genhtml_function_coverage=1 00:08:22.604 --rc genhtml_legend=1 00:08:22.604 --rc geninfo_all_blocks=1 00:08:22.604 --rc geninfo_unexecuted_blocks=1 00:08:22.604 00:08:22.604 ' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
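Before the bdev_io_wait test proper, the trace above runs scripts/common.sh's cmp_versions (via lt 1.15 2) to decide whether the installed lcov is new enough for the extra --rc coverage flags: both version strings are split on '.', '-' and ':' and compared component-wise as integers. A stripped-down sketch of the same comparison, written as a standalone helper rather than the literal scripts/common.sh source:

    # version_lt A B: succeed (return 0) when version A sorts strictly before B
    version_lt() {
        local -a ver1 ver2
        local IFS=.-:                      # split fields on dots, dashes and colons
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                           # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the lt 1.15 2 call above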
00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:22.604 11:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.604 
11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:22.604 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.605 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:22.605 Cannot find device "nvmf_init_br" 00:08:22.605 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:22.605 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:22.863 Cannot find device "nvmf_init_br2" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:22.863 Cannot find device "nvmf_tgt_br" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.863 Cannot find device "nvmf_tgt_br2" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:22.863 Cannot find device "nvmf_init_br" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:22.863 Cannot find device "nvmf_init_br2" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:22.863 Cannot find device "nvmf_tgt_br" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:22.863 Cannot find device "nvmf_tgt_br2" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:22.863 Cannot find device "nvmf_br" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:22.863 Cannot find device "nvmf_init_if" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:22.863 Cannot find device "nvmf_init_if2" 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:22.863 
11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:22.863 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:23.124 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:23.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:23.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:23.125 00:08:23.125 --- 10.0.0.3 ping statistics --- 00:08:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.125 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:23.125 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:23.125 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:08:23.125 00:08:23.125 --- 10.0.0.4 ping statistics --- 00:08:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.125 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:23.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:08:23.125 00:08:23.125 --- 10.0.0.1 ping statistics --- 00:08:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.125 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:23.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:23.125 00:08:23.125 --- 10.0.0.2 ping statistics --- 00:08:23.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.125 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=69233 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 69233 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 69233 ']' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.125 11:28:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.125 [2024-12-16 11:28:28.503938] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
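The nvmf_veth_init sequence above first tears down any leftovers (the Cannot find device / Cannot open network namespace lines are that cleanup running against a clean host), then builds the test network: veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, bridge-side ends enslaved to nvmf_br, addresses 10.0.0.1 through 10.0.0.4, an iptables accept rule for the NVMe/TCP port, and a ping in each direction to verify it. Condensed to one pair per side, a sketch of the same topology (the real function creates two initiator and two target pairs):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                              # bridge joins both sides
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host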
00:08:23.125 [2024-12-16 11:28:28.504087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.383 [2024-12-16 11:28:28.668159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.383 [2024-12-16 11:28:28.741776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.383 [2024-12-16 11:28:28.742083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.383 [2024-12-16 11:28:28.742190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.383 [2024-12-16 11:28:28.742292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.383 [2024-12-16 11:28:28.742368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.383 [2024-12-16 11:28:28.743741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.383 [2024-12-16 11:28:28.743790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.383 [2024-12-16 11:28:28.743919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.383 [2024-12-16 11:28:28.743924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.319 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.577 [2024-12-16 11:28:29.769185] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.577 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.577 Malloc0 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.578 [2024-12-16 11:28:29.827860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69286 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.578 { 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme$subsystem", 00:08:24.578 "trtype": "$TEST_TRANSPORT", 00:08:24.578 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "$NVMF_PORT", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.578 "hdgst": ${hdgst:-false}, 00:08:24.578 "ddgst": ${ddgst:-false} 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 } 00:08:24.578 EOF 00:08:24.578 )") 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69288 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.578 { 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme$subsystem", 00:08:24.578 "trtype": "$TEST_TRANSPORT", 00:08:24.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "$NVMF_PORT", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.578 "hdgst": ${hdgst:-false}, 00:08:24.578 "ddgst": ${ddgst:-false} 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 } 00:08:24.578 EOF 00:08:24.578 )") 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69291 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69295 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.578 { 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme$subsystem", 00:08:24.578 "trtype": "$TEST_TRANSPORT", 
00:08:24.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "$NVMF_PORT", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.578 "hdgst": ${hdgst:-false}, 00:08:24.578 "ddgst": ${ddgst:-false} 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 } 00:08:24.578 EOF 00:08:24.578 )") 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme1", 00:08:24.578 "trtype": "tcp", 00:08:24.578 "traddr": "10.0.0.3", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "4420", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.578 "hdgst": false, 00:08:24.578 "ddgst": false 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 }' 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme1", 00:08:24.578 "trtype": "tcp", 00:08:24.578 "traddr": "10.0.0.3", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "4420", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.578 "hdgst": false, 00:08:24.578 "ddgst": false 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 }' 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.578 { 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme$subsystem", 00:08:24.578 "trtype": "$TEST_TRANSPORT", 00:08:24.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "$NVMF_PORT", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.578 "hdgst": ${hdgst:-false}, 00:08:24.578 "ddgst": ${ddgst:-false} 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 } 00:08:24.578 EOF 00:08:24.578 )") 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme1", 00:08:24.578 "trtype": "tcp", 00:08:24.578 "traddr": "10.0.0.3", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "4420", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.578 "hdgst": false, 00:08:24.578 "ddgst": false 00:08:24.578 }, 00:08:24.578 "method": "bdev_nvme_attach_controller" 00:08:24.578 }' 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:24.578 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.578 "params": { 00:08:24.578 "name": "Nvme1", 00:08:24.578 "trtype": "tcp", 00:08:24.578 "traddr": "10.0.0.3", 00:08:24.578 "adrfam": "ipv4", 00:08:24.578 "trsvcid": "4420", 00:08:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.578 "hdgst": false, 00:08:24.579 "ddgst": false 00:08:24.579 }, 00:08:24.579 "method": "bdev_nvme_attach_controller" 00:08:24.579 }' 00:08:24.579 [2024-12-16 11:28:29.902970] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:24.579 [2024-12-16 11:28:29.903113] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:24.579 [2024-12-16 11:28:29.922061] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:24.579 [2024-12-16 11:28:29.922147] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:24.579 11:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69286 00:08:24.579 [2024-12-16 11:28:29.928311] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:24.579 [2024-12-16 11:28:29.928376] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:24.579 [2024-12-16 11:28:29.931564] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
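At this point the target side is complete: the TCP transport is registered (nvmf_create_transport -t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 is backed by a 64 MiB malloc bdev with 512-byte blocks and exposed on 10.0.0.3:4420. The test then fans out four bdevperf instances, one workload and one core mask each, feeding each its controller config over --json /dev/fd/63. A rough sketch of that fan-out, assuming the suite's gen_nvmf_target_json helper is sourced:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    common=(-q 128 -o 4096 -t 1 -s 256)

    # process substitution is what yields the --json /dev/fd/63 seen in the trace
    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${common[@]}" -w write & write_pid=$!
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${common[@]}" -w read  & read_pid=$!
    $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${common[@]}" -w flush & flush_pid=$!
    $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${common[@]}" -w unmap & unmap_pid=$!

    wait "$write_pid" "$read_pid" "$flush_pid" "$unmap_pid"      # collect all four one-second runs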
00:08:24.579 [2024-12-16 11:28:29.931649] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:24.836 [2024-12-16 11:28:30.132274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.836 [2024-12-16 11:28:30.204280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:24.836 [2024-12-16 11:28:30.210421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.095 [2024-12-16 11:28:30.273435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:25.095 [2024-12-16 11:28:30.376029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.095 [2024-12-16 11:28:30.395119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.095 Running I/O for 1 seconds... 00:08:25.095 Running I/O for 1 seconds... 00:08:25.095 [2024-12-16 11:28:30.461737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:25.095 [2024-12-16 11:28:30.464597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:25.353 Running I/O for 1 seconds... 00:08:25.353 Running I/O for 1 seconds... 00:08:26.289 9207.00 IOPS, 35.96 MiB/s 00:08:26.289 Latency(us) 00:08:26.289 [2024-12-16T11:28:31.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.289 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:26.289 Nvme1n1 : 1.01 9243.96 36.11 0.00 0.00 13773.66 6672.76 18111.77 00:08:26.289 [2024-12-16T11:28:31.703Z] =================================================================================================================== 00:08:26.289 [2024-12-16T11:28:31.703Z] Total : 9243.96 36.11 0.00 0.00 13773.66 6672.76 18111.77 00:08:26.289 169816.00 IOPS, 663.34 MiB/s 00:08:26.289 Latency(us) 00:08:26.289 [2024-12-16T11:28:31.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.289 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:26.289 Nvme1n1 : 1.00 169468.12 661.98 0.00 0.00 750.99 309.06 2025.66 00:08:26.289 [2024-12-16T11:28:31.703Z] =================================================================================================================== 00:08:26.289 [2024-12-16T11:28:31.703Z] Total : 169468.12 661.98 0.00 0.00 750.99 309.06 2025.66 00:08:26.289 7378.00 IOPS, 28.82 MiB/s [2024-12-16T11:28:31.703Z] 7671.00 IOPS, 29.96 MiB/s 00:08:26.289 Latency(us) 00:08:26.289 [2024-12-16T11:28:31.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.289 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:26.289 Nvme1n1 : 1.01 7439.62 29.06 0.00 0.00 17115.48 4617.31 25141.99 00:08:26.289 [2024-12-16T11:28:31.703Z] =================================================================================================================== 00:08:26.289 [2024-12-16T11:28:31.703Z] Total : 7439.62 29.06 0.00 0.00 17115.48 4617.31 25141.99 00:08:26.289 00:08:26.289 Latency(us) 00:08:26.289 [2024-12-16T11:28:31.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.289 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:26.289 Nvme1n1 : 1.01 7741.87 30.24 0.00 0.00 16459.60 6762.12 28120.90 00:08:26.289 [2024-12-16T11:28:31.703Z] 
=================================================================================================================== 00:08:26.289 [2024-12-16T11:28:31.703Z] Total : 7741.87 30.24 0.00 0.00 16459.60 6762.12 28120.90 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69288 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69291 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69295 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.548 rmmod nvme_tcp 00:08:26.548 rmmod nvme_fabrics 00:08:26.548 rmmod nvme_keyring 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 69233 ']' 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 69233 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 69233 ']' 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 69233 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.548 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69233 00:08:26.807 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.807 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.807 killing process with pid 69233 00:08:26.807 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 69233' 00:08:26.807 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 69233 00:08:26.807 11:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 69233 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:26.807 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.065 11:28:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:27.065 00:08:27.065 real 0m4.707s 00:08:27.065 user 0m18.811s 00:08:27.065 sys 0m2.345s 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.065 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 ************************************ 00:08:27.065 END TEST nvmf_bdev_io_wait 00:08:27.065 ************************************ 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.325 ************************************ 00:08:27.325 START TEST nvmf_queue_depth 00:08:27.325 ************************************ 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:27.325 * Looking for test storage... 00:08:27.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.325 --rc genhtml_branch_coverage=1 00:08:27.325 --rc genhtml_function_coverage=1 00:08:27.325 --rc genhtml_legend=1 00:08:27.325 --rc geninfo_all_blocks=1 00:08:27.325 --rc geninfo_unexecuted_blocks=1 00:08:27.325 00:08:27.325 ' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.325 --rc genhtml_branch_coverage=1 00:08:27.325 --rc genhtml_function_coverage=1 00:08:27.325 --rc genhtml_legend=1 00:08:27.325 --rc geninfo_all_blocks=1 00:08:27.325 --rc geninfo_unexecuted_blocks=1 00:08:27.325 00:08:27.325 ' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.325 --rc genhtml_branch_coverage=1 00:08:27.325 --rc genhtml_function_coverage=1 00:08:27.325 --rc genhtml_legend=1 00:08:27.325 --rc geninfo_all_blocks=1 00:08:27.325 --rc geninfo_unexecuted_blocks=1 00:08:27.325 00:08:27.325 ' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.325 --rc genhtml_branch_coverage=1 00:08:27.325 --rc genhtml_function_coverage=1 00:08:27.325 --rc genhtml_legend=1 00:08:27.325 --rc geninfo_all_blocks=1 00:08:27.325 --rc geninfo_unexecuted_blocks=1 00:08:27.325 00:08:27.325 ' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.325 11:28:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.325 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:27.326 
11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.326 11:28:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:27.326 Cannot find device "nvmf_init_br" 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:27.326 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:27.585 Cannot find device "nvmf_init_br2" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:27.585 Cannot find device "nvmf_tgt_br" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.585 Cannot find device "nvmf_tgt_br2" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:27.585 Cannot find device "nvmf_init_br" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:27.585 Cannot find device "nvmf_init_br2" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:27.585 Cannot find device "nvmf_tgt_br" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:27.585 Cannot find device "nvmf_tgt_br2" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:27.585 Cannot find device "nvmf_br" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:27.585 Cannot find device "nvmf_init_if" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:27.585 Cannot find device "nvmf_init_if2" 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.585 11:28:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:27.585 11:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.844 
11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:27.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:27.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms
00:08:27.844
00:08:27.844 --- 10.0.0.3 ping statistics ---
00:08:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.844 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:27.844 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:27.844 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms
00:08:27.844
00:08:27.844 --- 10.0.0.4 ping statistics ---
00:08:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.844 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:27.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:27.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:08:27.844
00:08:27.844 --- 10.0.0.1 ping statistics ---
00:08:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.844 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:08:27.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:27.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:27.844 00:08:27.844 --- 10.0.0.2 ping statistics --- 00:08:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.844 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=69582 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 69582 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69582 ']' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.844 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.844 [2024-12-16 11:28:33.178223] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
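The nvmfappstart step traced here boils down to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace that was wired up above and blocking until its RPC socket answers. A condensed sketch of that pattern, with the command and paths taken from the trace (the polling loop is a simplified stand-in for the harness's waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # block until the target is up and answering RPCs on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done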
00:08:27.844 [2024-12-16 11:28:33.178384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.103 [2024-12-16 11:28:33.331178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.103 [2024-12-16 11:28:33.375257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.103 [2024-12-16 11:28:33.375326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.103 [2024-12-16 11:28:33.375336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.103 [2024-12-16 11:28:33.375344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.103 [2024-12-16 11:28:33.375350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.103 [2024-12-16 11:28:33.375745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.103 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.103 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.103 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.103 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.103 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 [2024-12-16 11:28:33.558480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 Malloc0 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
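rpc_cmd in this trace is the harness's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Written out as plain rpc.py calls, the target bring-up that queue_depth.sh performs is roughly the following sketch (arguments copied from the trace; the last two calls are the ones traced just below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420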
00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 [2024-12-16 11:28:33.610881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69624 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69624 /var/tmp/bdevperf.sock 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69624 ']' 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.363 11:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.363 [2024-12-16 11:28:33.678158] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
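The bdevperf invocation above follows SPDK's two-process pattern: -z starts the app idle with no workload configured, -r points it at a private RPC socket, and the test then drives it over that socket. A sketch of the same three steps outside the harness, reassembled from the commands traced here and just below:

    # 1) start bdevperf waiting for configuration (-z) on its own RPC socket (-r)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # 2) attach the exported namespace as bdev NVMe0n1 over NVMe/TCP
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # 3) kick off the configured verify workload at queue depth 1024
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests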
00:08:28.363 [2024-12-16 11:28:33.678277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69624 ]
00:08:28.621 [2024-12-16 11:28:33.817587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:28.621 [2024-12-16 11:28:33.882829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:29.556 NVMe0n1
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.556 11:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:29.556 Running I/O for 10 seconds...
00:08:31.877 8019.00 IOPS, 31.32 MiB/s
[2024-12-16T11:28:38.226Z] 7680.00 IOPS, 30.00 MiB/s
[2024-12-16T11:28:39.188Z] 7771.33 IOPS, 30.36 MiB/s
[2024-12-16T11:28:40.123Z] 7682.00 IOPS, 30.01 MiB/s
[2024-12-16T11:28:41.058Z] 7809.00 IOPS, 30.50 MiB/s
[2024-12-16T11:28:41.994Z] 7978.33 IOPS, 31.17 MiB/s
[2024-12-16T11:28:42.930Z] 8045.71 IOPS, 31.43 MiB/s
[2024-12-16T11:28:44.305Z] 8110.38 IOPS, 31.68 MiB/s
[2024-12-16T11:28:45.249Z] 8182.56 IOPS, 31.96 MiB/s
[2024-12-16T11:28:45.249Z] 8192.60 IOPS, 32.00 MiB/s
00:08:39.835 Latency(us)
[2024-12-16T11:28:45.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.835 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:39.835 Verification LBA range: start 0x0 length 0x4000
00:08:39.835 NVMe0n1 : 10.06 8245.93 32.21 0.00 0.00 123676.97 9651.67 122969.37
[2024-12-16T11:28:45.249Z] ===================================================================================================================
[2024-12-16T11:28:45.249Z] Total : 8245.93 32.21 0.00 0.00 123676.97 9651.67 122969.37
00:08:39.835 {
00:08:39.835 "results": [
00:08:39.835 {
00:08:39.835 "job": "NVMe0n1",
00:08:39.835 "core_mask": "0x1",
00:08:39.835 "workload": "verify",
00:08:39.835 "status": "finished",
00:08:39.835 "verify_range": {
00:08:39.835 "start": 0,
00:08:39.836 "length": 16384
00:08:39.836 },
00:08:39.836 "queue_depth": 1024,
00:08:39.836 "io_size": 4096,
00:08:39.836 "runtime": 10.059509,
00:08:39.836 "iops": 8245.929299332602,
00:08:39.836 "mibps": 32.210661325517975,
00:08:39.836 "io_failed": 0,
00:08:39.836 "io_timeout": 0,
00:08:39.836 "avg_latency_us": 123676.97040714559,
00:08:39.836 "min_latency_us": 9651.665454545455,
00:08:39.836 "max_latency_us": 122969.36727272728
00:08:39.836 }
00:08:39.836 ],
00:08:39.836 "core_count": 1
00:08:39.836 }
00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth
-- target/queue_depth.sh@39 -- # killprocess 69624 00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69624 ']' 00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69624 00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.836 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69624 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.836 killing process with pid 69624 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69624' 00:08:39.836 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.836 00:08:39.836 Latency(us) 00:08:39.836 [2024-12-16T11:28:45.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.836 [2024-12-16T11:28:45.250Z] =================================================================================================================== 00:08:39.836 [2024-12-16T11:28:45.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69624 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69624 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.836 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:40.095 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.095 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.096 rmmod nvme_tcp 00:08:40.096 rmmod nvme_fabrics 00:08:40.096 rmmod nvme_keyring 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 69582 ']' 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 69582 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69582 ']' 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69582 00:08:40.096 11:28:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69582 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:40.096 killing process with pid 69582 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69582' 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69582 00:08:40.096 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69582 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:40.354 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.613 11:28:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.613 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:40.613 00:08:40.614 real 0m13.383s 00:08:40.614 user 0m22.743s 00:08:40.614 sys 0m2.313s 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.614 ************************************ 00:08:40.614 END TEST nvmf_queue_depth 00:08:40.614 ************************************ 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.614 ************************************ 00:08:40.614 START TEST nvmf_target_multipath 00:08:40.614 ************************************ 00:08:40.614 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:40.874 * Looking for test storage... 
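The scripts/common.sh trace that follows is the harness's lcov version gate: split each version string on '.', '-' and ':' and compare component by component. A condensed sketch of the same logic (lt succeeds when the first version is strictly lower; non-numeric components are not handled here, and the LCOV_OPTS assignment mirrors what the trace sets for lcov 1.x):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        # walk the longer of the two component lists, padding missing parts with 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi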
00:08:40.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
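The block above is scripts/common.sh deciding whether the installed lcov predates 2.0; because `lt 1.15 2` is true, the pre-2.0 `--rc lcov_*` option spellings get exported as LCOV_OPTS/LCOV. Stripped of the decimal-validation detail, the traced lt/cmp_versions pair behaves like this sketch, which handles only the '<' case exercised here (the sourcing of test/nvmf/common.sh then resumes below):

    # lt 1.15 2  ->  true, so the old lcov option spellings are exported
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.- read -ra ver1 <<< "$1"   # split on dots/dashes, as IFS=.-: in the trace
        IFS=.- read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal versions are not "<"
    }
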
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 
11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:40.874 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.875 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
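The `[: : integer expression expected` complaint captured above is harmless: common.sh line 33 runs a numeric test on a variable that is unset in this configuration, so `[` sees an empty string where it expects an integer and the test simply fails, which is the intended outcome. The variable name is not visible in the trace (only the empty expansion `'[' '' -eq 1 ']'` is), so the one below is hypothetical; a default expansion would keep the test well-formed:

    # What the trace shows, with a made-up variable name:
    [ "$SOME_UNSET_FLAG" -eq 1 ]            # unset -> [: : integer expression expected

    # Defensive variant that stays a valid integer test when the flag is unset:
    [ "${SOME_UNSET_FLAG:-0}" -eq 1 ]
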
MALLOC_BDEV_SIZE=64 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:40.875 11:28:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:40.875 Cannot find device "nvmf_init_br" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:40.875 Cannot find device "nvmf_init_br2" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:40.875 Cannot find device "nvmf_tgt_br" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.875 Cannot find device "nvmf_tgt_br2" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:40.875 Cannot find device "nvmf_init_br" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:40.875 Cannot find device "nvmf_init_br2" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:40.875 Cannot find device "nvmf_tgt_br" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:40.875 Cannot find device "nvmf_tgt_br2" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:40.875 Cannot find device "nvmf_br" 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:40.875 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:41.134 Cannot find device "nvmf_init_if" 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:41.134 Cannot find device "nvmf_init_if2" 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.134 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:41.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:08:41.135 00:08:41.135 --- 10.0.0.3 ping statistics --- 00:08:41.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.135 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:41.135 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:41.135 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:41.135 00:08:41.135 --- 10.0.0.4 ping statistics --- 00:08:41.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.135 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
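Everything from `ip netns add` through the iptables inserts above is nvmf_veth_init building the test topology: two initiator-side veths on the host, two target-side veths inside nvmf_tgt_ns_spdk, all four peer legs enslaved to one bridge, and TCP/4420 explicitly accepted. Condensed to one initiator/target pair (the second pair is created identically, and the remaining two pings run inside the namespace just below):

    # nvmf_init_if (10.0.0.1, host) --- nvmf_br bridge --- nvmf_tgt_if (10.0.0.3, in netns)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host reaches the target address through the bridge
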
00:08:41.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:41.135 00:08:41.135 --- 10.0.0.1 ping statistics --- 00:08:41.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.135 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:41.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:41.135 00:08:41.135 --- 10.0.0.2 ping statistics --- 00:08:41.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.135 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.135 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=70004 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 70004 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 70004 ']' 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:41.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.394 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.394 [2024-12-16 11:28:46.616883] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:41.394 [2024-12-16 11:28:46.617000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.394 [2024-12-16 11:28:46.766447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.652 [2024-12-16 11:28:46.837510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.653 [2024-12-16 11:28:46.837568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.653 [2024-12-16 11:28:46.837583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.653 [2024-12-16 11:28:46.837594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.653 [2024-12-16 11:28:46.837603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.653 [2024-12-16 11:28:46.838923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.653 [2024-12-16 11:28:46.839051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.653 [2024-12-16 11:28:46.839107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.653 [2024-12-16 11:28:46.839107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.588 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.588 [2024-12-16 11:28:47.989322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.847 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:43.106 Malloc0 00:08:43.106 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:43.364 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
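With the target app listening on /var/tmp/spdk.sock, the subsystem is configured entirely over JSON-RPC: create the TCP transport, back a 64 MiB malloc bdev, and expose it through cnode1 (the namespace attach and the two listeners follow immediately below in the trace). The same bring-up collected into a plain script, flags exactly as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192    # transport options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.4 -s 4420   # second ANA path
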
target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.623 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:43.882 [2024-12-16 11:28:49.130124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:43.882 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:44.141 [2024-12-16 11:28:49.402489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:44.141 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:44.399 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:44.677 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.677 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:44.677 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.677 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:44.677 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 
00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.606 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
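After connecting through both 10.0.0.3 and 10.0.0.4, get_subsystem walks /sys/class/nvme-subsystem comparing NQN and serial, and the two controller paths fall out of a glob, as traced above. A reconstruction of that lookup; the exact sysfs files the helper reads are an assumption (the trace shows only the compared values), and any whitespace trimming is elided:

    subsystem=
    for s in /sys/class/nvme-subsystem/*; do
        [[ $(<"$s/subsysnqn") == nqn.2016-06.io.spdk:cnode1 ]] || continue
        [[ $(<"$s/serial") == SPDKISFASTANDAWESOME* ]] || continue   # serial location assumed
        subsystem=${s##*/} && break             # -> nvme-subsys0
    done

    paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
    paths=("${paths[@]##*/}")                   # -> nvme0c0n1 nvme0c1n1
    (( ${#paths[@]} == 2 ))                     # one block path per connected controller
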
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70154 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:46.607 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:46.607 [global] 00:08:46.607 thread=1 00:08:46.607 invalidate=1 00:08:46.607 rw=randrw 00:08:46.607 time_based=1 00:08:46.607 runtime=6 00:08:46.607 ioengine=libaio 00:08:46.607 direct=1 00:08:46.607 bs=4096 00:08:46.607 iodepth=128 00:08:46.607 norandommap=0 00:08:46.607 numjobs=1 00:08:46.607 00:08:46.607 verify_dump=1 00:08:46.607 verify_backlog=512 00:08:46.607 verify_state_save=0 00:08:46.607 do_verify=1 00:08:46.607 verify=crc32c-intel 00:08:46.607 [job0] 00:08:46.607 filename=/dev/nvme0n1 00:08:46.607 Could not set queue depth (nvme0n1) 00:08:46.866 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:46.866 fio-3.35 00:08:46.866 Starting 1 thread 00:08:47.802 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:48.061 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:48.320 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:49.257 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:49.257 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.257 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:49.257 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:49.516 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:49.774 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:50.751 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:50.751 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
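The repeating @18-@26 lines above are check_ana_state: it polls the per-path sysfs ana_state file until it reports the expected value, sleeping 1 s per attempt with a 20-try budget. Reassembled from the trace (not copied from multipath.sh, so treat it as a sketch):

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1   # ~20 s budget, the traced @26 check
            sleep 1s
        done
    }
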
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.751 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:50.751 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70154 00:08:53.295 00:08:53.295 job0: (groupid=0, jobs=1): err= 0: pid=70175: Mon Dec 16 11:28:58 2024 00:08:53.295 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(241MiB/6005msec) 00:08:53.295 slat (usec): min=2, max=5507, avg=55.88, stdev=253.34 00:08:53.295 clat (usec): min=629, max=19720, avg=8508.70, stdev=1313.45 00:08:53.295 lat (usec): min=657, max=19742, avg=8564.58, stdev=1324.84 00:08:53.295 clat percentiles (usec): 00:08:53.295 | 1.00th=[ 5145], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7635], 00:08:53.295 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:08:53.295 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10683], 00:08:53.295 | 99.00th=[12649], 99.50th=[13173], 99.90th=[15533], 99.95th=[16909], 00:08:53.295 | 99.99th=[19530] 00:08:53.295 bw ( KiB/s): min= 4504, max=27680, per=50.94%, avg=20934.36, stdev=8034.02, samples=11 00:08:53.295 iops : min= 1126, max= 6920, avg=5233.55, stdev=2008.50, samples=11 00:08:53.295 write: IOPS=6173, BW=24.1MiB/s (25.3MB/s)(126MiB/5231msec); 0 zone resets 00:08:53.295 slat (usec): min=15, max=4816, avg=68.21, stdev=179.22 00:08:53.295 clat (usec): min=383, max=19400, avg=7319.04, stdev=1085.42 00:08:53.295 lat (usec): min=493, max=19424, avg=7387.25, stdev=1090.33 00:08:53.295 clat percentiles (usec): 00:08:53.295 | 1.00th=[ 4146], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6652], 00:08:53.295 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:08:53.295 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:08:53.295 | 99.00th=[10683], 99.50th=[11731], 99.90th=[13829], 99.95th=[14484], 00:08:53.295 | 99.99th=[18482] 00:08:53.295 bw ( KiB/s): min= 4792, max=26824, per=85.19%, avg=21038.91, stdev=7914.55, samples=11 00:08:53.295 iops : min= 1198, max= 6706, avg=5259.73, stdev=1978.64, samples=11 00:08:53.295 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:53.295 lat (msec) : 2=0.01%, 4=0.30%, 10=93.06%, 20=6.62% 00:08:53.295 cpu : usr=6.06%, sys=21.47%, ctx=6017, majf=0, minf=114 00:08:53.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:53.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.295 issued rwts: total=61696,32294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.295 00:08:53.295 Run status group 0 (all jobs): 00:08:53.295 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=241MiB (253MB), run=6005-6005msec 00:08:53.295 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=126MiB (132MB), run=5231-5231msec 00:08:53.295 00:08:53.295 Disk stats (read/write): 00:08:53.295 nvme0n1: ios=60783/31685, merge=0/0, ticks=486934/217445, in_queue=704379, util=98.56% 00:08:53.295 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:53.295 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:53.554 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:53.554 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:08:53.555 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
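The rounds above and the second fio pass below repeat the same two-step pattern: flip each listener's ANA state on the target over RPC, then wait for the host's two controller paths to observe the change while I/O keeps running. One round, exactly as driven in the trace (note the spelling difference: the RPC takes non_optimized while sysfs reports non-optimized):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    check_ana_state nvme0c0n1 inaccessible      # path reached via 10.0.0.3
    check_ana_state nvme0c1n1 non-optimized     # path reached via 10.0.0.4
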
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70302 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:54.488 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:54.488 [global] 00:08:54.488 thread=1 00:08:54.488 invalidate=1 00:08:54.488 rw=randrw 00:08:54.488 time_based=1 00:08:54.488 runtime=6 00:08:54.488 ioengine=libaio 00:08:54.488 direct=1 00:08:54.488 bs=4096 00:08:54.488 iodepth=128 00:08:54.488 norandommap=0 00:08:54.488 numjobs=1 00:08:54.488 00:08:54.488 verify_dump=1 00:08:54.488 verify_backlog=512 00:08:54.488 verify_state_save=0 00:08:54.488 do_verify=1 00:08:54.488 verify=crc32c-intel 00:08:54.488 [job0] 00:08:54.488 filename=/dev/nvme0n1 00:08:54.488 Could not set queue depth (nvme0n1) 00:08:54.745 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.745 fio-3.35 00:08:54.745 Starting 1 thread 00:08:55.677 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:55.935 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.192 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:57.125 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:57.125 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.125 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:57.125 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:57.382 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:57.948 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:58.881 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:58.881 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.881 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.881 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70302 00:09:00.788 00:09:00.788 job0: (groupid=0, jobs=1): err= 0: pid=70323: Mon Dec 16 11:29:06 2024 00:09:00.788 read: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(254MiB/6004msec) 00:09:00.788 slat (usec): min=4, max=5249, avg=45.98, stdev=227.08 00:09:00.788 clat (usec): min=797, max=18996, avg=8210.10, stdev=1649.74 00:09:00.788 lat (usec): min=830, max=19006, avg=8256.09, stdev=1662.83 00:09:00.788 clat percentiles (usec): 00:09:00.788 | 1.00th=[ 3982], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 7111], 00:09:00.788 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8455], 00:09:00.788 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10945], 00:09:00.788 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14746], 99.95th=[15401], 00:09:00.788 | 99.99th=[17433] 00:09:00.788 bw ( KiB/s): min= 4774, max=33352, per=51.88%, avg=22452.91, stdev=6960.84, samples=11 00:09:00.788 iops : min= 1193, max= 8338, avg=5613.18, stdev=1740.34, samples=11 00:09:00.788 write: IOPS=6302, BW=24.6MiB/s (25.8MB/s)(132MiB/5361msec); 0 zone resets 00:09:00.788 slat (usec): min=14, max=1858, avg=56.49, stdev=146.84 00:09:00.788 clat (usec): min=739, max=15123, avg=6770.66, stdev=1520.02 00:09:00.788 lat (usec): min=768, max=15148, avg=6827.15, stdev=1530.63 00:09:00.788 clat percentiles (usec): 00:09:00.788 | 1.00th=[ 3032], 5.00th=[ 3884], 10.00th=[ 4424], 20.00th=[ 5538], 00:09:00.788 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:09:00.788 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:00.788 | 99.00th=[10683], 99.50th=[11731], 99.90th=[13304], 99.95th=[13566], 00:09:00.788 | 99.99th=[14091] 00:09:00.788 bw ( KiB/s): min= 4766, max=33032, per=89.20%, avg=22487.82, stdev=6890.12, samples=11 00:09:00.788 iops : min= 1191, max= 8258, avg=5621.91, stdev=1722.66, samples=11 00:09:00.788 lat (usec) : 750=0.01%, 1000=0.01% 00:09:00.788 lat (msec) : 2=0.09%, 4=2.62%, 10=90.21%, 20=7.08% 00:09:00.788 cpu : usr=5.56%, sys=22.79%, ctx=6173, majf=0, minf=127 00:09:00.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:00.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.788 issued rwts: total=64957,33788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.788 00:09:00.788 Run status group 0 (all jobs): 00:09:00.788 READ: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=254MiB (266MB), run=6004-6004msec 00:09:00.788 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=132MiB (138MB), run=5361-5361msec 00:09:00.788 00:09:00.788 Disk stats (read/write): 00:09:00.788 nvme0n1: ios=64083/33171, merge=0/0, ticks=494359/210044, in_queue=704403, util=98.70% 00:09:00.788 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:00.788 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
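The check_ana_state helper traced throughout the multipath run above reduces to roughly the following sketch. It is reconstructed from the xtrace (path, ana_state, timeout=20, the sysfs file test, the sleep 1s retry, and the timeout decrement all appear verbatim); the exact failure handling on timeout is an assumption.

# Poll /sys/block/<ctrl-path>/ana_state until the kernel reports the
# expected ANA state, retrying once per second for up to 20 seconds.
check_ana_state() {
	local path=$1 ana_state=$2
	local timeout=20
	local ana_state_f=/sys/block/$path/ana_state
	while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
		sleep 1s
		if (( timeout-- == 0 )); then
			return 1    # assumed: give up after ~20 retries
		fi
	done
}

Each failover step in the log flips the two listeners' ANA states via rpc.py nvmf_subsystem_listener_set_ana_state and then calls this poller so the kernel's multipath view catches up before fio continues.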
00:09:00.788 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:00.788 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:00.788 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.047 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.047 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:01.047 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:01.047 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.306 rmmod nvme_tcp 00:09:01.306 rmmod nvme_fabrics 00:09:01.306 rmmod nvme_keyring 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 70004 ']' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 70004 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 70004 ']' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 70004 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70004 00:09:01.306 killing process with pid 70004 00:09:01.306 11:29:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70004' 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 70004 00:09:01.306 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 70004 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.874 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:01.874 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:01.875 11:29:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:01.875 ************************************ 00:09:01.875 END TEST nvmf_target_multipath 00:09:01.875 ************************************ 00:09:01.875 00:09:01.875 real 0m21.318s 00:09:01.875 user 1m23.028s 00:09:01.875 sys 0m6.346s 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.875 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.135 ************************************ 00:09:02.135 START TEST nvmf_zcopy 00:09:02.135 ************************************ 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:02.135 * Looking for test storage... 
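The START TEST / END TEST banners and the real/user/sys timing above come from the run_test wrapper used by nvmf_target_core.sh. A minimal sketch, assuming the banner text seen in the log; the timing and xtrace plumbing are approximations, not the exact helper:

# Wrap a test script: print banners, run it timed, propagate its args.
run_test() {
	local test_name=$1
	shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	time "$@"
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
}
# e.g., as invoked above:
# run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp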
00:09:02.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.135 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.136 --rc genhtml_branch_coverage=1 00:09:02.136 --rc genhtml_function_coverage=1 00:09:02.136 --rc genhtml_legend=1 00:09:02.136 --rc geninfo_all_blocks=1 00:09:02.136 --rc geninfo_unexecuted_blocks=1 00:09:02.136 00:09:02.136 ' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.136 --rc genhtml_branch_coverage=1 00:09:02.136 --rc genhtml_function_coverage=1 00:09:02.136 --rc genhtml_legend=1 00:09:02.136 --rc geninfo_all_blocks=1 00:09:02.136 --rc geninfo_unexecuted_blocks=1 00:09:02.136 00:09:02.136 ' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.136 --rc genhtml_branch_coverage=1 00:09:02.136 --rc genhtml_function_coverage=1 00:09:02.136 --rc genhtml_legend=1 00:09:02.136 --rc geninfo_all_blocks=1 00:09:02.136 --rc geninfo_unexecuted_blocks=1 00:09:02.136 00:09:02.136 ' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.136 --rc genhtml_branch_coverage=1 00:09:02.136 --rc genhtml_function_coverage=1 00:09:02.136 --rc genhtml_legend=1 00:09:02.136 --rc geninfo_all_blocks=1 00:09:02.136 --rc geninfo_unexecuted_blocks=1 00:09:02.136 00:09:02.136 ' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
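The lcov version gate traced above ("lt 1.15 2" via cmp_versions in scripts/common.sh) boils down to this sketch. The array splitting and the ternary loop bound mirror the xtrace; the digit validation done by the traced decimal helper is omitted here, and the equality fallback is an assumption.

# Compare dotted version strings component by component.
cmp_versions() {
	local ver1 ver2 ver1_l ver2_l v op=$2
	IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
	IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
	ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
	for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
		(( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
		(( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
	done
	[[ $op == *'='* ]]    # assumed: equal versions satisfy only ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }    # as in the trace: lt 1.15 2 -> true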
00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
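The "[: : integer expression expected" message from nvmf/common.sh line 33 above is bash complaining about a numeric test applied to an empty variable while build_nvmf_app_args assembles the NVMF_APP argument array. A minimal reproduction; the variable and the appended argument are hypothetical stand-ins, not the real config knob:

NVMF_APP=(nvmf_tgt)
flag=''                          # stand-in for an SPDK_* knob that was never set
if [ "$flag" -eq 1 ]; then       # [ '' -eq 1 ] -> "[: : integer expression expected"
	NVMF_APP+=(--extra-arg)      # never reached; --extra-arg is illustrative
fi

The error is cosmetic: the test simply evaluates false, the optional argument is skipped, and the log continues with the remaining '[ -n ... ]' and '[ 0 -eq 1 ]' checks.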
00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.136 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:02.395 Cannot find device "nvmf_init_br" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:02.395 11:29:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:02.395 Cannot find device "nvmf_init_br2" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:02.395 Cannot find device "nvmf_tgt_br" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.395 Cannot find device "nvmf_tgt_br2" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:02.395 Cannot find device "nvmf_init_br" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:02.395 Cannot find device "nvmf_init_br2" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:02.395 Cannot find device "nvmf_tgt_br" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:02.395 Cannot find device "nvmf_tgt_br2" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:02.395 Cannot find device "nvmf_br" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:02.395 Cannot find device "nvmf_init_if" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:02.395 Cannot find device "nvmf_init_if2" 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:02.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.395 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.396 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:02.396 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.396 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.396 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.396 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:02.655 11:29:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:02.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:09:02.655 00:09:02.655 --- 10.0.0.3 ping statistics --- 00:09:02.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.655 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:02.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:02.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:09:02.655 00:09:02.655 --- 10.0.0.4 ping statistics --- 00:09:02.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.655 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:02.655 00:09:02.655 --- 10.0.0.1 ping statistics --- 00:09:02.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.655 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:02.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:02.655 00:09:02.655 --- 10.0.0.2 ping statistics --- 00:09:02.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.655 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.655 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=70664 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 70664 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 70664 ']' 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.655 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.655 [2024-12-16 11:29:08.067069] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
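Condensing the nvmf_veth_init block above: the test builds a veth/bridge topology so the initiator side (10.0.0.1, 10.0.0.2) can reach an SPDK target isolated in the nvmf_tgt_ns_spdk network namespace (10.0.0.3, 10.0.0.4), which the four pings then verify. Core commands, copied from the trace with the second interface pair (nvmf_init_if2/nvmf_tgt_if2) elided for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator half
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target half
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge stitches both halves
ip link set nvmf_tgt_br master nvmf_br
# plus the traced iptables ACCEPT rules (tagged SPDK_NVMF) for port 4420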
00:09:02.655 [2024-12-16 11:29:08.067473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.915 [2024-12-16 11:29:08.218684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.915 [2024-12-16 11:29:08.305908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.915 [2024-12-16 11:29:08.306019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.915 [2024-12-16 11:29:08.306031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.915 [2024-12-16 11:29:08.306040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.915 [2024-12-16 11:29:08.306047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.915 [2024-12-16 11:29:08.306601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 [2024-12-16 11:29:09.194959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 [2024-12-16 11:29:09.211202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 malloc0 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.851 { 00:09:03.851 "params": { 00:09:03.851 "name": "Nvme$subsystem", 00:09:03.851 "trtype": "$TEST_TRANSPORT", 00:09:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.851 "adrfam": "ipv4", 00:09:03.851 "trsvcid": "$NVMF_PORT", 00:09:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.851 "hdgst": ${hdgst:-false}, 00:09:03.851 "ddgst": ${ddgst:-false} 00:09:03.851 }, 00:09:03.851 "method": "bdev_nvme_attach_controller" 00:09:03.851 } 00:09:03.851 EOF 00:09:03.851 )") 00:09:03.851 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:04.110 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
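Stripped of xtrace noise, the zcopy target provisioning above amounts to five RPCs (rpc_cmd is a thin wrapper around scripts/rpc.py against the target's RPC socket; parameters are copied from the trace, and --zcopy requests the zero-copy data path this test exercises):

rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

A discovery listener is also added on the same address/port, after which bdevperf connects as the initiator using the JSON config that gen_nvmf_target_json prints next.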
00:09:04.110 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:04.110 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.110 "params": { 00:09:04.110 "name": "Nvme1", 00:09:04.110 "trtype": "tcp", 00:09:04.110 "traddr": "10.0.0.3", 00:09:04.110 "adrfam": "ipv4", 00:09:04.110 "trsvcid": "4420", 00:09:04.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.110 "hdgst": false, 00:09:04.110 "ddgst": false 00:09:04.110 }, 00:09:04.110 "method": "bdev_nvme_attach_controller" 00:09:04.110 }' 00:09:04.110 [2024-12-16 11:29:09.313640] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:09:04.110 [2024-12-16 11:29:09.313749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:09:04.110 [2024-12-16 11:29:09.469473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.368 [2024-12-16 11:29:09.526337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.368 Running I/O for 10 seconds... 00:09:06.681 5339.00 IOPS, 41.71 MiB/s [2024-12-16T11:29:13.032Z] 5429.00 IOPS, 42.41 MiB/s [2024-12-16T11:29:13.968Z] 5657.33 IOPS, 44.20 MiB/s [2024-12-16T11:29:14.904Z] 5488.00 IOPS, 42.88 MiB/s [2024-12-16T11:29:15.839Z] 5540.00 IOPS, 43.28 MiB/s [2024-12-16T11:29:16.774Z] 5558.83 IOPS, 43.43 MiB/s [2024-12-16T11:29:18.149Z] 5589.00 IOPS, 43.66 MiB/s [2024-12-16T11:29:19.085Z] 5618.62 IOPS, 43.90 MiB/s [2024-12-16T11:29:20.021Z] 5619.67 IOPS, 43.90 MiB/s [2024-12-16T11:29:20.021Z] 5592.50 IOPS, 43.69 MiB/s 00:09:14.607 Latency(us) 00:09:14.607 [2024-12-16T11:29:20.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.607 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:14.607 Verification LBA range: start 0x0 length 0x1000 00:09:14.607 Nvme1n1 : 10.01 5594.33 43.71 0.00 0.00 22809.47 718.66 51952.17 00:09:14.607 [2024-12-16T11:29:20.021Z] =================================================================================================================== 00:09:14.607 [2024-12-16T11:29:20.021Z] Total : 5594.33 43.71 0.00 0.00 22809.47 718.66 51952.17 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70838 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.607 { 00:09:14.607 "params": { 00:09:14.607 "name": "Nvme$subsystem", 
00:09:14.607 "trtype": "$TEST_TRANSPORT", 00:09:14.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.607 "adrfam": "ipv4", 00:09:14.607 "trsvcid": "$NVMF_PORT", 00:09:14.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.607 "hdgst": ${hdgst:-false}, 00:09:14.607 "ddgst": ${ddgst:-false} 00:09:14.607 }, 00:09:14.607 "method": "bdev_nvme_attach_controller" 00:09:14.607 } 00:09:14.607 EOF 00:09:14.607 )") 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:14.607 [2024-12-16 11:29:19.966921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.607 [2024-12-16 11:29:19.966983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:14.607 2024/12/16 11:29:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:14.607 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.607 "params": { 00:09:14.607 "name": "Nvme1", 00:09:14.607 "trtype": "tcp", 00:09:14.607 "traddr": "10.0.0.3", 00:09:14.607 "adrfam": "ipv4", 00:09:14.607 "trsvcid": "4420", 00:09:14.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.607 "hdgst": false, 00:09:14.607 "ddgst": false 00:09:14.607 }, 00:09:14.607 "method": "bdev_nvme_attach_controller" 00:09:14.607 }' 00:09:14.607 [2024-12-16 11:29:19.978833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.607 [2024-12-16 11:29:19.978862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.607 2024/12/16 11:29:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.607 [2024-12-16 11:29:19.990831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.607 [2024-12-16 11:29:19.990878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.607 2024/12/16 11:29:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.607 [2024-12-16 11:29:20.002836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.607 [2024-12-16 11:29:20.002862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.607 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:09:14.607 [2024-12-16 11:29:20.014840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.607 [2024-12-16 11:29:20.014867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.607 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.866 [2024-12-16 11:29:20.025508] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:09:14.866 [2024-12-16 11:29:20.026213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:09:14.866 [2024-12-16 11:29:20.026839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.866 [2024-12-16 11:29:20.026865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.866 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.866 [2024-12-16 11:29:20.038840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.866 [2024-12-16 11:29:20.038871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.866 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.866 [2024-12-16 11:29:20.050851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.866 [2024-12-16 11:29:20.050892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.866 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.866 [2024-12-16 11:29:20.062850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.866 [2024-12-16 11:29:20.062880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.074852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.074927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.086854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.086910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.098926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.098970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.114857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.114940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.126865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.126942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.138917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.138961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.150926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.150971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.162899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.162946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.174919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.174966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.181740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.867 [2024-12-16 11:29:20.186920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.186966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.198943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.199027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.210943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.211032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.222929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.222983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.234934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.234989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.246199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.867 [2024-12-16 11:29:20.246930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.246957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.258940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.258997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:14.867 [2024-12-16 11:29:20.270967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.867 [2024-12-16 11:29:20.271060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.867 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.126 [2024-12-16 11:29:20.282955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.126 [2024-12-16 11:29:20.283004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.126 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.126 [2024-12-16 11:29:20.294996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.126 [2024-12-16 11:29:20.295101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.126 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.126 [2024-12-16 11:29:20.306972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.126 [2024-12-16 11:29:20.307064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.126 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.126 [2024-12-16 11:29:20.318944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.319002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.330959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.331059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.343008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.343080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.355004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.355079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.366996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.367061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.378989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.379031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.391015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.391057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.403062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.403089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.415078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.415110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.427041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.427076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.439096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.439133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:15.127 Running I/O for 5 seconds... 00:09:15.127 [2024-12-16 11:29:20.451035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.451067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.468417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.468492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.485043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.485123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.501827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.501881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.519244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.519298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.127 [2024-12-16 11:29:20.535983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-12-16 11:29:20.536062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.552248] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.552300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.568492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.568558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.579770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.579844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.595175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.595238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.612066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.612143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.627606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.627657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.638343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 
11:29:20.638395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.654090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.654141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.672242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.672301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.689092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.689145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.705589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.705646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.721963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.722030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.386 [2024-12-16 11:29:20.738846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-12-16 11:29:20.738893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 2024/12/16 11:29:20 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.387 [2024-12-16 11:29:20.754837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-12-16 11:29:20.754950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.387 [2024-12-16 11:29:20.772242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-12-16 11:29:20.772278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.387 [2024-12-16 11:29:20.788496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-12-16 11:29:20.788551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.804331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.804416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.821175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.821227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.837603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.837656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.854448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.854503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.870472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.870546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.882083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.882155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.897261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.897331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.914656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.914720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.933181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.933224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.949822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.949864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.966405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.966502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:20.984644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:20.984699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:21.001485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:21.001540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:21.018581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:21.018621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:21.034658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:21.034697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:21.044956] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:21.045033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.649 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.649 [2024-12-16 11:29:21.060258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.649 [2024-12-16 11:29:21.060296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.076166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.076220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.092556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.092611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.108058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.108121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.125421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.125497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.141430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 
11:29:21.141469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.158616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.158667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.175305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.929 [2024-12-16 11:29:21.175361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.929 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.929 [2024-12-16 11:29:21.191509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.191568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.208109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.208168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.224572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.224640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.235903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.235956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.251156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.251207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.266946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.267035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.277891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.277943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.293784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.293848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.309622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.309682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.319582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.319642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:15.930 [2024-12-16 11:29:21.335415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.930 [2024-12-16 11:29:21.335470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.930 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.189 [2024-12-16 11:29:21.352255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.189 [2024-12-16 11:29:21.352295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.189 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.189 [2024-12-16 11:29:21.368711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.189 [2024-12-16 11:29:21.368751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.189 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.189 [2024-12-16 11:29:21.385356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.189 [2024-12-16 11:29:21.385398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.401932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.401975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.418018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.418063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.433644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.433686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 10776.00 IOPS, 84.19 MiB/s [2024-12-16T11:29:21.604Z] [2024-12-16 11:29:21.450600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.450639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.466629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.466672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.484839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.484880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.501980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.502034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.518301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.518338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:16.190 [2024-12-16 11:29:21.534914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.534985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.551979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.552028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.568461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.568497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.584354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.584395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.190 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.190 [2024-12-16 11:29:21.600633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.190 [2024-12-16 11:29:21.600673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.617824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.617865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.634056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.634097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.650587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.650633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.666877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.666935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.685540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.685587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.701270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.701311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.719386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.719438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.735452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.735499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.745158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.745199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.759680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.759742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.775203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.775251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.786256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.786309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.801298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.450 [2024-12-16 11:29:21.801344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.450 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.450 [2024-12-16 11:29:21.818062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.451 [2024-12-16 11:29:21.818118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.451 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.451 [2024-12-16 11:29:21.834689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.451 [2024-12-16 11:29:21.834729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.451 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.451 [2024-12-16 11:29:21.851300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.451 [2024-12-16 11:29:21.851342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.451 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.868252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.868294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.710 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.884895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.884968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.710 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.900734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.900775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.710 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.911695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.911743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.710 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.927209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.927253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.710 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.710 [2024-12-16 11:29:21.945738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.710 [2024-12-16 11:29:21.945808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:21.961540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:21.961594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:21.978760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:21.978816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:21.995829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:21.995872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.012230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.012275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:16.711 [2024-12-16 11:29:22.029721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.029768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.046065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.046121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.063886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.063929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.079745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.079793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.097206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.097248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.711 [2024-12-16 11:29:22.113720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.711 [2024-12-16 11:29:22.113764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.711 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.130145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.130187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.146434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.146489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.157422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.157472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.172692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.172746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.190180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.190219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.205707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.205748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.216708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.216748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.232683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.232728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.248731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.248802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.264531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.264595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.274336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.969 [2024-12-16 11:29:22.274389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.969 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.969 [2024-12-16 11:29:22.290063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.290122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.970 [2024-12-16 11:29:22.305693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.305749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.970 [2024-12-16 11:29:22.316490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.316530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.970 [2024-12-16 11:29:22.332075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.332133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.970 [2024-12-16 11:29:22.349277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.349329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.970 [2024-12-16 11:29:22.366528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.970 [2024-12-16 11:29:22.366623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.970 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.228 [2024-12-16 11:29:22.385023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.228 [2024-12-16 11:29:22.385077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.228 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.228 [2024-12-16 11:29:22.401127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.401176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.419392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.419456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.436509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.436557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 10817.00 IOPS, 84.51 MiB/s [2024-12-16T11:29:22.643Z] [2024-12-16 11:29:22.451834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.451885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.468647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.468695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.485392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.485461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.496861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.496914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.512373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.512417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.529079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.529125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.545656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.545702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.562857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.562905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.578779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.578822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.590432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.590486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.606429] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.606502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.621452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.621512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.229 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.229 [2024-12-16 11:29:22.637960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.229 [2024-12-16 11:29:22.638022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.654838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.654900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.671117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.671163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.687200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.687243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.698044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 
11:29:22.698080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.713198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.713236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.730912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.730951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.746979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.747044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.762956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.763021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.779997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.780063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.797198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.797283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.488 [2024-12-16 11:29:22.814261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.488 [2024-12-16 11:29:22.814318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.488 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.489 [2024-12-16 11:29:22.831262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.489 [2024-12-16 11:29:22.831342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.489 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.489 [2024-12-16 11:29:22.847338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.489 [2024-12-16 11:29:22.847396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.489 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.489 [2024-12-16 11:29:22.864498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.489 [2024-12-16 11:29:22.864548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.489 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.489 [2024-12-16 11:29:22.880366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.489 [2024-12-16 11:29:22.880424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.489 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.489 [2024-12-16 11:29:22.891437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.489 [2024-12-16 11:29:22.891498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.489 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.907513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.907593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.924221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.924280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.941032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.941091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.958075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.958125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.974149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.974227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:22.990548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:22.990610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:23.007213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:23.007264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:23.023438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:23.023503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:23.039593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:23.039647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.747 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.747 [2024-12-16 11:29:23.058850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.747 [2024-12-16 11:29:23.058912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.074481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.074532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.091736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.091808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.108125] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.108189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.125269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.125324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.142102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.142143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.748 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:17.748 [2024-12-16 11:29:23.158930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.748 [2024-12-16 11:29:23.158978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.007 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.007 [2024-12-16 11:29:23.175837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.007 [2024-12-16 11:29:23.175884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.007 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.007 [2024-12-16 11:29:23.192909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.007 [2024-12-16 11:29:23.192964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.007 2024/12/16 11:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.007 [2024-12-16 11:29:23.209802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.007 [2024-12-16 
00:09:18.266 10844.67 IOPS, 84.72 MiB/s [2024-12-16T11:29:23.680Z]
00:09:18.266 [... identical nvmf_subsystem_add_ns failures continue ...]
00:09:19.045 10887.25 IOPS, 85.06 MiB/s [2024-12-16T11:29:24.459Z]
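A hypothetical way to confirm the collision between attempts would be to dump the subsystem's namespace table; nvmf_get_subsystems is the standard SPDK query RPC, though the grep context below is an assumption about its pretty-printed output.

  # Hypothetical check: list cnode1's namespaces and look for the occupied NSID
  scripts/rpc.py nvmf_get_subsystems | grep -B1 -A3 '"nsid": 1'
  # An existing entry with "nsid": 1 explains 'Requested NSID 1 already in use';
  # adding with an unused NSID (or omitting -n so the target assigns one) would succeed.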
00:09:19.304 [... identical nvmf_subsystem_add_ns failures continue through 11:29:25, only the timestamps changing ...]
00:09:19.833 [2024-12-16 11:29:25.080391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.833 [2024-12-16 11:29:25.080430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.833 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.833 [2024-12-16 11:29:25.096802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.834 [2024-12-16 11:29:25.096844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.834 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.834 [2024-12-16 11:29:25.115459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.834 [2024-12-16 11:29:25.115503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.834 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.834 [2024-12-16 11:29:25.130519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.834 [2024-12-16 11:29:25.130569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.834 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.834 [2024-12-16 11:29:25.140383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.834 [2024-12-16 11:29:25.140421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.834 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.834 [2024-12-16 11:29:25.155521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.835 [2024-12-16 11:29:25.155570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.835 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.835 [2024-12-16 11:29:25.170799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.835 [2024-12-16 11:29:25.170865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.835 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.835 [2024-12-16 11:29:25.187448] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.835 [2024-12-16 11:29:25.187526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.835 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.835 [2024-12-16 11:29:25.204274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.835 [2024-12-16 11:29:25.204318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.835 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.835 [2024-12-16 11:29:25.220843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.835 [2024-12-16 11:29:25.220897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.836 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.836 [2024-12-16 11:29:25.236823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.836 [2024-12-16 11:29:25.236873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.836 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.097 [2024-12-16 11:29:25.252918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.097 [2024-12-16 11:29:25.252965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.097 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.097 [2024-12-16 11:29:25.262801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.097 [2024-12-16 11:29:25.262841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.097 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.097 [2024-12-16 11:29:25.278882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.097 [2024-12-16 
11:29:25.278925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.294110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.294149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.310045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.310083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.328645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.328693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.344926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.344969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.362955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.363022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.098 [2024-12-16 11:29:25.378274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.098 [2024-12-16 11:29:25.378322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.098 2024/12/16 11:29:25 
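This failure mode is easy to reproduce by hand. A minimal sketch, assuming a running nvmf target whose subsystem nqn.2016-06.io.spdk:cnode1 already exposes malloc0 as NSID 1, with rpc.py invoked from the SPDK repo root the same way the test framework does:

  # Ask the target to attach the same bdev under an NSID that is already taken:
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # The target logs "Requested NSID 1 already in use" and the RPC fails with
  # JSON-RPC error Code=-32602 (Invalid parameters), as in the records above.

zcopy.sh appears to drive this call in a tight background loop deliberately, exercising the namespace hot-add error path while I/O is in flight, so the repeated rejections here are expected noise rather than a test failure.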
00:09:20.098 10817.60 IOPS, 84.51 MiB/s [2024-12-16T11:29:25.512Z]
00:09:20.098 [2024-12-16 11:29:25.453039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:20.098 [2024-12-16 11:29:25.453087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:20.098 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:20.098
00:09:20.098 Latency(us)
00:09:20.098 [2024-12-16T11:29:25.512Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s      TO/s    Average        min        max
00:09:20.098 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:20.098 Nvme1n1            :       5.01   10819.48      84.53      0.00      0.00   11814.73    5093.93   23235.49
00:09:20.098 [2024-12-16T11:29:25.512Z] ===================================================================================================================
00:09:20.098 [2024-12-16T11:29:25.512Z] Total              :              10819.48      84.53      0.00      0.00   11814.73    5093.93   23235.49
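The MiB/s column in this summary is just the IOPS column scaled by the 8192-byte I/O size (MiB/s = IOPS x 8192 / 2^20). A quick check with plain awk, nothing SPDK-specific:

  # Verify the Total row: 10819.48 IOPS at 8 KiB per I/O
  awk 'BEGIN { printf "%.2f\n", 10819.48 * 8192 / (1024 * 1024) }'   # prints 84.53
  # Same check for the in-flight progress line above:
  awk 'BEGIN { printf "%.2f\n", 10817.60 * 8192 / (1024 * 1024) }'   # prints 84.51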
00:09:20.098 (once the I/O summary is printed, the same duplicate-NSID sequence keeps recurring at 12 ms intervals from 11:29:25.462 through 11:29:25.738 while the background add-ns loop drains; the intervening identical records are omitted)
00:09:20.362 [2024-12-16 11:29:25.750848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:20.362 [2024-12-16 11:29:25.750878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:09:20.362 2024/12/16 11:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:20.362 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70838) - No such process 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70838 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.362 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.621 delay0 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.621 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:20.621 [2024-12-16 11:29:25.954973] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:27.240 Initializing NVMe Controllers 00:09:27.240 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.240 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:27.240 Initialization complete. Launching workers. 
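Stripped of the xtrace prefixes, the wind-down above is a short, readable sequence: replace the instant-completing malloc0 namespace with a delay bdev so that queued commands stay in flight long enough to be aborted, then run the abort example against the target. The same commands consolidated (paths relative to the SPDK repo root; per the usual bdev_delay semantics the -r/-t/-w/-n values, taken verbatim from the log, are average/p99 read/write latencies in microseconds):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

With roughly one-second nominal latencies, most of the 64 queued commands are still outstanding when the aborts go out, which is what the submitted/success/unsuccessful tallies below are counting.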
00:09:27.240 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:09:27.240 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:09:27.240 success 214, unsuccessful 169, failed 0 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.240 rmmod nvme_tcp 00:09:27.240 rmmod nvme_fabrics 00:09:27.240 rmmod nvme_keyring 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 70664 ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 70664 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 70664 ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 70664 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70664 00:09:27.240 killing process with pid 70664 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70664' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 70664 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 70664 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.240 11:29:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:27.240 00:09:27.240 real 0m25.268s 00:09:27.240 user 0m40.357s 00:09:27.240 sys 0m6.979s 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.240 ************************************ 00:09:27.240 END TEST nvmf_zcopy 00:09:27.240 ************************************ 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.240 11:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.499 ************************************ 00:09:27.499 START TEST nvmf_nmic 00:09:27.499 ************************************ 00:09:27.499 11:29:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.499 * Looking for test storage... 00:09:27.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.499 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.499 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.499 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.499 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.499 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.500 --rc genhtml_branch_coverage=1 00:09:27.500 --rc genhtml_function_coverage=1 00:09:27.500 --rc genhtml_legend=1 00:09:27.500 --rc geninfo_all_blocks=1 00:09:27.500 --rc geninfo_unexecuted_blocks=1 00:09:27.500 00:09:27.500 ' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.500 --rc genhtml_branch_coverage=1 00:09:27.500 --rc genhtml_function_coverage=1 00:09:27.500 --rc genhtml_legend=1 00:09:27.500 --rc geninfo_all_blocks=1 00:09:27.500 --rc geninfo_unexecuted_blocks=1 00:09:27.500 00:09:27.500 ' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.500 --rc genhtml_branch_coverage=1 00:09:27.500 --rc genhtml_function_coverage=1 00:09:27.500 --rc genhtml_legend=1 00:09:27.500 --rc geninfo_all_blocks=1 00:09:27.500 --rc geninfo_unexecuted_blocks=1 00:09:27.500 00:09:27.500 ' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.500 --rc genhtml_branch_coverage=1 00:09:27.500 --rc genhtml_function_coverage=1 00:09:27.500 --rc genhtml_legend=1 00:09:27.500 --rc geninfo_all_blocks=1 00:09:27.500 --rc geninfo_unexecuted_blocks=1 00:09:27.500 00:09:27.500 ' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.500 11:29:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:27.500 11:29:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:27.500 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:27.501 Cannot 
find device "nvmf_init_br" 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:27.501 Cannot find device "nvmf_init_br2" 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:27.501 Cannot find device "nvmf_tgt_br" 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.501 Cannot find device "nvmf_tgt_br2" 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:27.501 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:27.759 Cannot find device "nvmf_init_br" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:27.759 Cannot find device "nvmf_init_br2" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:27.759 Cannot find device "nvmf_tgt_br" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:27.759 Cannot find device "nvmf_tgt_br2" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:27.759 Cannot find device "nvmf_br" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:27.759 Cannot find device "nvmf_init_if" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:27.759 Cannot find device "nvmf_init_if2" 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.759 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.759 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:28.018 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:28.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:09:28.019 00:09:28.019 --- 10.0.0.3 ping statistics --- 00:09:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.019 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:28.019 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:28.019 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:28.019 00:09:28.019 --- 10.0.0.4 ping statistics --- 00:09:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.019 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:28.019 00:09:28.019 --- 10.0.0.1 ping statistics --- 00:09:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.019 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:28.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:09:28.019 00:09:28.019 --- 10.0.0.2 ping statistics --- 00:09:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.019 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=71225 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 71225 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 71225 ']' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.019 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.019 [2024-12-16 11:29:33.326761] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
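At this point the topology is complete: the initiator ends (nvmf_init_if, nvmf_init_if2) stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target ends (nvmf_tgt_if, nvmf_tgt_if2) sit inside nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and the bridge halves are enslaved to nvmf_br with iptables ACCEPT rules for port 4420. The four pings verify reachability in both directions before the target starts; a condensed sketch of that check, using the addresses assigned above:

    # host -> namespace, then namespace -> host, across the nvmf_br bridge
    for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done
    for ip in 10.0.0.1 10.0.0.2; do
        ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"
    done

Only then does nvmfappstart launch nvmf_tgt inside the namespace (pid 71225 here), after which the rpc_cmd calls below provision the target. A condensed sketch of that provisioning, where rpc.py stands for scripts/rpc.py talking to the target's RPC socket:

    # provisioning via JSON-RPC (condensed from the rpc_cmd calls below)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420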
00:09:28.019 [2024-12-16 11:29:33.326849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.278 [2024-12-16 11:29:33.482777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.278 [2024-12-16 11:29:33.557789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.278 [2024-12-16 11:29:33.557869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.278 [2024-12-16 11:29:33.557892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.278 [2024-12-16 11:29:33.557903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.278 [2024-12-16 11:29:33.557912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.278 [2024-12-16 11:29:33.559552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.278 [2024-12-16 11:29:33.559607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.278 [2024-12-16 11:29:33.559700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.278 [2024-12-16 11:29:33.559715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.536 [2024-12-16 11:29:33.775140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.536 Malloc0 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.536 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 [2024-12-16 11:29:33.853726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:28.537 test case1: single bdev can't be used in multiple subsystems 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 [2024-12-16 11:29:33.877478] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:28.537 [2024-12-16 11:29:33.877519] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:28.537 [2024-12-16 11:29:33.877533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.537 2024/12/16 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:28.537 request: 00:09:28.537 { 00:09:28.537 "method": "nvmf_subsystem_add_ns", 00:09:28.537 "params": { 00:09:28.537 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:28.537 "namespace": { 00:09:28.537 "bdev_name": "Malloc0", 00:09:28.537 "no_auto_visible": false, 00:09:28.537 "hide_metadata": false 00:09:28.537 } 00:09:28.537 } 00:09:28.537 } 00:09:28.537 Got JSON-RPC error response 00:09:28.537 GoRPCClient: error on JSON-RPC call 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:28.537 Adding namespace failed - expected result. 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:28.537 test case2: host connect to nvmf target in multiple paths 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.537 [2024-12-16 11:29:33.889597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.537 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:28.795 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:29.054 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.054 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:29.054 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.054 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:29.054 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
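Test case1 above behaves as intended: Malloc0 is already claimed by cnode1, so adding it to cnode2 fails with the JSON-RPC "Invalid parameters" error, which the test treats as the expected result. Test case2 then connects the host to cnode1 over both listeners (ports 4420 and 4421), giving two paths to the same namespace, and waitforserial polls lsblk until the device with serial SPDKISFASTANDAWESOME appears. A condensed sketch of that poll, following the xtrace above:

    # wait for the connected namespace to show up as a block device
    i=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( nvme_devices == 1 )) && break    # one controller expected here
        sleep 2
    done

The fio-wrapper invocation that follows renders its flags (-i 4096 -d 1 -t write -r 1 -v) into the job file shown below; a rough standalone equivalent of that job, as a sketch rather than what the wrapper literally executes:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
        --verify=crc32c-intel --do_verify=1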
00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:30.953 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.953 [global] 00:09:30.953 thread=1 00:09:30.953 invalidate=1 00:09:30.953 rw=write 00:09:30.953 time_based=1 00:09:30.953 runtime=1 00:09:30.953 ioengine=libaio 00:09:30.953 direct=1 00:09:30.953 bs=4096 00:09:30.953 iodepth=1 00:09:30.953 norandommap=0 00:09:30.953 numjobs=1 00:09:30.953 00:09:30.953 verify_dump=1 00:09:30.953 verify_backlog=512 00:09:30.953 verify_state_save=0 00:09:30.953 do_verify=1 00:09:30.953 verify=crc32c-intel 00:09:30.953 [job0] 00:09:30.953 filename=/dev/nvme0n1 00:09:30.953 Could not set queue depth (nvme0n1) 00:09:31.212 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.212 fio-3.35 00:09:31.212 Starting 1 thread 00:09:32.145 00:09:32.145 job0: (groupid=0, jobs=1): err= 0: pid=71321: Mon Dec 16 11:29:37 2024 00:09:32.145 read: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:09:32.145 slat (nsec): min=11864, max=70308, avg=17135.13, stdev=6314.57 00:09:32.145 clat (usec): min=130, max=951, avg=175.63, stdev=24.73 00:09:32.145 lat (usec): min=142, max=964, avg=192.76, stdev=25.87 00:09:32.145 clat percentiles (usec): 00:09:32.145 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:32.145 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:32.145 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:09:32.145 | 99.00th=[ 231], 99.50th=[ 241], 99.90th=[ 306], 99.95th=[ 383], 00:09:32.145 | 99.99th=[ 955] 00:09:32.145 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:32.145 slat (usec): min=17, max=139, avg=24.75, stdev= 7.79 00:09:32.145 clat (usec): min=90, max=889, avg=124.86, stdev=25.02 00:09:32.145 lat (usec): min=108, max=918, avg=149.60, stdev=27.03 00:09:32.145 clat percentiles (usec): 00:09:32.145 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:09:32.145 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 125], 00:09:32.145 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 155], 00:09:32.145 | 99.00th=[ 178], 99.50th=[ 255], 99.90th=[ 359], 99.95th=[ 437], 00:09:32.145 | 99.99th=[ 889] 00:09:32.145 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.145 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.145 lat (usec) : 100=0.74%, 250=98.85%, 500=0.38%, 1000=0.03% 00:09:32.145 cpu : usr=2.70%, sys=9.00%, ctx=5815, majf=0, minf=5 00:09:32.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.145 issued rwts: total=2743,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.145 00:09:32.145 Run status group 0 (all jobs): 00:09:32.145 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:09:32.145 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s 
(12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:32.145 00:09:32.145 Disk stats (read/write): 00:09:32.145 nvme0n1: ios=2610/2648, merge=0/0, ticks=507/368, in_queue=875, util=91.48% 00:09:32.145 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.404 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.404 rmmod nvme_tcp 00:09:32.404 rmmod nvme_fabrics 00:09:32.663 rmmod nvme_keyring 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 71225 ']' 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 71225 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 71225 ']' 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 71225 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71225 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.663 killing process with pid 71225 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 71225' 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 71225 00:09:32.663 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 71225 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:32.922 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:33.180 00:09:33.180 real 0m5.846s 00:09:33.180 user 0m18.175s 00:09:33.180 sys 0m1.417s 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.180 ************************************ 00:09:33.180 END TEST nvmf_nmic 00:09:33.180 ************************************ 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.180 ************************************ 00:09:33.180 START TEST nvmf_fio_target 00:09:33.180 ************************************ 00:09:33.180 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.440 * Looking for test storage... 00:09:33.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.440 --rc genhtml_branch_coverage=1 00:09:33.440 --rc genhtml_function_coverage=1 00:09:33.440 --rc genhtml_legend=1 00:09:33.440 --rc geninfo_all_blocks=1 00:09:33.440 --rc geninfo_unexecuted_blocks=1 00:09:33.440 00:09:33.440 ' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.440 --rc genhtml_branch_coverage=1 00:09:33.440 --rc genhtml_function_coverage=1 00:09:33.440 --rc genhtml_legend=1 00:09:33.440 --rc geninfo_all_blocks=1 00:09:33.440 --rc geninfo_unexecuted_blocks=1 00:09:33.440 00:09:33.440 ' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.440 --rc genhtml_branch_coverage=1 00:09:33.440 --rc genhtml_function_coverage=1 00:09:33.440 --rc genhtml_legend=1 00:09:33.440 --rc geninfo_all_blocks=1 00:09:33.440 --rc geninfo_unexecuted_blocks=1 00:09:33.440 00:09:33.440 ' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.440 --rc genhtml_branch_coverage=1 00:09:33.440 --rc genhtml_function_coverage=1 00:09:33.440 --rc genhtml_legend=1 00:09:33.440 --rc geninfo_all_blocks=1 00:09:33.440 --rc geninfo_unexecuted_blocks=1 00:09:33.440 00:09:33.440 ' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:33.440 
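The nvmf_fio_target test begins by sourcing test/nvmf/common.sh again, so the blocks below repeat what the nmic run showed earlier: paths/export.sh prepends the Go, golangci and protoc directories on every source, which is why the exported PATH carries the same triplet many times over, and line 33 of common.sh evaluates [ '' -eq 1 ] against an empty flag, producing the "integer expression expected" warning. A sketch of both, where the dedup guard and the default are suggested fixes rather than what the scripts currently do, and SOME_TEST_FLAG is a placeholder name:

    # paths/export.sh effect (sketch): an unconditional prepend duplicates on re-source
    PATH=/opt/go/1.21.1/bin:$PATH
    # a dedup guard avoids the repetition:
    case ":$PATH:" in
        *":/opt/go/1.21.1/bin:"*) ;;                # already present, skip
        *) PATH=/opt/go/1.21.1/bin:$PATH ;;
    esac

    # common.sh line 33 (sketch): defaulting the flag keeps the test numeric
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ]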
11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.440 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.441 11:29:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:33.441 Cannot find device "nvmf_init_br" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:33.441 Cannot find device "nvmf_init_br2" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:33.441 Cannot find device "nvmf_tgt_br" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.441 Cannot find device "nvmf_tgt_br2" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:33.441 Cannot find device "nvmf_init_br" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:33.441 Cannot find device "nvmf_init_br2" 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:33.441 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:33.700 Cannot find device "nvmf_tgt_br" 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:33.700 Cannot find device "nvmf_tgt_br2" 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:33.700 Cannot find device "nvmf_br" 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:33.700 Cannot find device "nvmf_init_if" 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:33.700 Cannot find device "nvmf_init_if2" 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:33.700 
11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.700 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.700 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:33.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:09:33.960 00:09:33.960 --- 10.0.0.3 ping statistics --- 00:09:33.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.960 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:33.960 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:33.960 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:33.960 00:09:33.960 --- 10.0.0.4 ping statistics --- 00:09:33.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.960 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:33.960 00:09:33.960 --- 10.0.0.1 ping statistics --- 00:09:33.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.960 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:33.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:33.960 00:09:33.960 --- 10.0.0.2 ping statistics --- 00:09:33.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.960 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=71559 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 71559 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 71559 ']' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.960 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.960 [2024-12-16 11:29:39.275715] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
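The nvmfappstart/waitforlisten pair above amounts to launching nvmf_tgt inside the freshly created nvmf_tgt_ns_spdk namespace and then polling until its RPC socket answers. A minimal bash sketch of that flow, assuming the paths shown in the log; the polling loop is an illustrative reconstruction, not the literal helper body:

# Launch the target inside the namespace (matches the log's nvmf/common.sh@508 line).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the app responds; waitforlisten does roughly
# this (the real helper also checks that $nvmfpid is still alive between attempts).
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done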
00:09:33.960 [2024-12-16 11:29:39.275840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.219 [2024-12-16 11:29:39.432222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.219 [2024-12-16 11:29:39.518615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.219 [2024-12-16 11:29:39.518711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.219 [2024-12-16 11:29:39.518730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.219 [2024-12-16 11:29:39.518742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.219 [2024-12-16 11:29:39.518753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.219 [2024-12-16 11:29:39.520328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.219 [2024-12-16 11:29:39.520463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.219 [2024-12-16 11:29:39.520633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.219 [2024-12-16 11:29:39.520643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.476 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.477 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.734 [2024-12-16 11:29:40.057529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.734 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.301 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:35.301 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.559 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:35.559 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.818 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:35.818 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.077 11:29:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:36.077 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:36.335 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.902 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:36.902 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.160 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:37.160 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.419 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:37.419 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:37.677 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.935 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.935 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.194 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:38.194 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:38.762 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:39.021 [2024-12-16 11:29:44.191990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.021 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:39.280 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:39.539 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:39.805 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:39.805 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:39.805 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:09:39.805 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:39.805 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:39.806 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:41.710 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:41.710 [global] 00:09:41.710 thread=1 00:09:41.710 invalidate=1 00:09:41.710 rw=write 00:09:41.710 time_based=1 00:09:41.710 runtime=1 00:09:41.710 ioengine=libaio 00:09:41.710 direct=1 00:09:41.710 bs=4096 00:09:41.710 iodepth=1 00:09:41.710 norandommap=0 00:09:41.710 numjobs=1 00:09:41.710 00:09:41.710 verify_dump=1 00:09:41.710 verify_backlog=512 00:09:41.710 verify_state_save=0 00:09:41.710 do_verify=1 00:09:41.710 verify=crc32c-intel 00:09:41.710 [job0] 00:09:41.710 filename=/dev/nvme0n1 00:09:41.710 [job1] 00:09:41.710 filename=/dev/nvme0n2 00:09:41.710 [job2] 00:09:41.710 filename=/dev/nvme0n3 00:09:41.711 [job3] 00:09:41.711 filename=/dev/nvme0n4 00:09:41.711 Could not set queue depth (nvme0n1) 00:09:41.711 Could not set queue depth (nvme0n2) 00:09:41.711 Could not set queue depth (nvme0n3) 00:09:41.711 Could not set queue depth (nvme0n4) 00:09:41.969 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.969 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.969 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.969 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.969 fio-3.35 00:09:41.969 Starting 4 threads 00:09:43.345 00:09:43.345 job0: (groupid=0, jobs=1): err= 0: pid=71849: Mon Dec 16 11:29:48 2024 00:09:43.345 read: IOPS=1146, BW=4587KiB/s (4698kB/s)(4592KiB/1001msec) 00:09:43.345 slat (nsec): min=8913, max=65430, avg=16771.01, stdev=5037.11 00:09:43.345 clat (usec): min=202, max=41424, avg=423.90, stdev=1212.68 00:09:43.345 lat (usec): min=216, max=41436, avg=440.67, stdev=1212.55 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 241], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 343], 00:09:43.345 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 396], 00:09:43.345 | 70.00th=[ 412], 80.00th=[ 437], 90.00th=[ 478], 95.00th=[ 498], 00:09:43.345 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[41681], 00:09:43.345 | 99.99th=[41681] 00:09:43.345 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:43.345 
slat (usec): min=12, max=129, avg=24.19, stdev= 8.07 00:09:43.345 clat (usec): min=119, max=6049, avg=293.99, stdev=288.35 00:09:43.345 lat (usec): min=148, max=6091, avg=318.18, stdev=290.20 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 153], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 219], 00:09:43.345 | 30.00th=[ 243], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 289], 00:09:43.345 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 383], 00:09:43.345 | 99.00th=[ 449], 99.50th=[ 2802], 99.90th=[ 5866], 99.95th=[ 6063], 00:09:43.345 | 99.99th=[ 6063] 00:09:43.345 bw ( KiB/s): min= 6776, max= 6776, per=23.66%, avg=6776.00, stdev= 0.00, samples=1 00:09:43.345 iops : min= 1694, max= 1694, avg=1694.00, stdev= 0.00, samples=1 00:09:43.345 lat (usec) : 250=19.75%, 500=77.83%, 750=2.05% 00:09:43.345 lat (msec) : 2=0.04%, 4=0.22%, 10=0.07%, 50=0.04% 00:09:43.345 cpu : usr=1.40%, sys=4.40%, ctx=2686, majf=0, minf=15 00:09:43.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 issued rwts: total=1148,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.345 job1: (groupid=0, jobs=1): err= 0: pid=71850: Mon Dec 16 11:29:48 2024 00:09:43.345 read: IOPS=1323, BW=5295KiB/s (5422kB/s)(5300KiB/1001msec) 00:09:43.345 slat (nsec): min=11051, max=50968, avg=17116.54, stdev=4296.31 00:09:43.345 clat (usec): min=212, max=41441, avg=422.99, stdev=1129.59 00:09:43.345 lat (usec): min=224, max=41455, avg=440.11, stdev=1129.51 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 251], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 343], 00:09:43.345 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 400], 00:09:43.345 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 490], 95.00th=[ 515], 00:09:43.345 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 586], 99.95th=[41681], 00:09:43.345 | 99.99th=[41681] 00:09:43.345 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:43.345 slat (usec): min=12, max=116, avg=24.16, stdev= 6.56 00:09:43.345 clat (usec): min=116, max=672, avg=243.36, stdev=68.25 00:09:43.345 lat (usec): min=141, max=695, avg=267.51, stdev=67.36 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 130], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 180], 00:09:43.345 | 30.00th=[ 196], 40.00th=[ 219], 50.00th=[ 243], 60.00th=[ 262], 00:09:43.345 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 359], 00:09:43.345 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 586], 99.95th=[ 676], 00:09:43.345 | 99.99th=[ 676] 00:09:43.345 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.345 lat (usec) : 250=29.43%, 500=66.90%, 750=3.64% 00:09:43.345 lat (msec) : 50=0.03% 00:09:43.345 cpu : usr=1.20%, sys=4.90%, ctx=2862, majf=0, minf=12 00:09:43.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 issued rwts: total=1325,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.345 job2: (groupid=0, 
jobs=1): err= 0: pid=71851: Mon Dec 16 11:29:48 2024 00:09:43.345 read: IOPS=1560, BW=6242KiB/s (6392kB/s)(6248KiB/1001msec) 00:09:43.345 slat (usec): min=13, max=132, avg=22.43, stdev= 9.25 00:09:43.345 clat (usec): min=148, max=1094, avg=278.51, stdev=40.73 00:09:43.345 lat (usec): min=221, max=1120, avg=300.94, stdev=42.54 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 249], 00:09:43.345 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:09:43.345 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 343], 00:09:43.345 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 611], 99.95th=[ 1090], 00:09:43.345 | 99.99th=[ 1090] 00:09:43.345 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:43.345 slat (usec): min=20, max=123, avg=33.62, stdev=14.49 00:09:43.345 clat (usec): min=142, max=421, avg=221.10, stdev=32.55 00:09:43.345 lat (usec): min=163, max=474, avg=254.72, stdev=40.54 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:09:43.345 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:09:43.345 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 281], 00:09:43.345 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 412], 00:09:43.345 | 99.99th=[ 420] 00:09:43.345 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.345 lat (usec) : 250=56.15%, 500=43.77%, 750=0.06% 00:09:43.345 lat (msec) : 2=0.03% 00:09:43.345 cpu : usr=2.10%, sys=7.50%, ctx=3612, majf=0, minf=3 00:09:43.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 issued rwts: total=1562,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.345 job3: (groupid=0, jobs=1): err= 0: pid=71852: Mon Dec 16 11:29:48 2024 00:09:43.345 read: IOPS=2035, BW=8144KiB/s (8339kB/s)(8152KiB/1001msec) 00:09:43.345 slat (nsec): min=12898, max=52890, avg=16825.12, stdev=3802.37 00:09:43.345 clat (usec): min=178, max=872, avg=236.18, stdev=33.52 00:09:43.345 lat (usec): min=192, max=889, avg=253.00, stdev=34.23 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:09:43.345 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:09:43.345 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 297], 00:09:43.345 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 367], 00:09:43.345 | 99.99th=[ 873] 00:09:43.345 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:43.345 slat (usec): min=15, max=145, avg=27.29, stdev= 7.90 00:09:43.345 clat (usec): min=127, max=2126, avg=205.20, stdev=73.97 00:09:43.345 lat (usec): min=149, max=2145, avg=232.49, stdev=76.36 00:09:43.345 clat percentiles (usec): 00:09:43.345 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:09:43.345 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:09:43.345 | 70.00th=[ 212], 80.00th=[ 229], 90.00th=[ 260], 95.00th=[ 285], 00:09:43.345 | 99.00th=[ 474], 99.50th=[ 553], 99.90th=[ 816], 99.95th=[ 979], 00:09:43.345 | 99.99th=[ 2114] 
00:09:43.345 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.345 lat (usec) : 250=79.54%, 500=20.02%, 750=0.32%, 1000=0.10% 00:09:43.345 lat (msec) : 4=0.02% 00:09:43.345 cpu : usr=2.20%, sys=6.40%, ctx=4086, majf=0, minf=9 00:09:43.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.345 issued rwts: total=2038,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.345 00:09:43.345 Run status group 0 (all jobs): 00:09:43.345 READ: bw=23.7MiB/s (24.8MB/s), 4587KiB/s-8144KiB/s (4698kB/s-8339kB/s), io=23.7MiB (24.9MB), run=1001-1001msec 00:09:43.345 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:43.345 00:09:43.345 Disk stats (read/write): 00:09:43.345 nvme0n1: ios=1074/1218, merge=0/0, ticks=483/344, in_queue=827, util=87.47% 00:09:43.345 nvme0n2: ios=1068/1503, merge=0/0, ticks=467/372, in_queue=839, util=88.55% 00:09:43.345 nvme0n3: ios=1505/1536, merge=0/0, ticks=429/363, in_queue=792, util=89.25% 00:09:43.345 nvme0n4: ios=1536/1974, merge=0/0, ticks=381/433, in_queue=814, util=89.91% 00:09:43.345 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:43.345 [global] 00:09:43.346 thread=1 00:09:43.346 invalidate=1 00:09:43.346 rw=randwrite 00:09:43.346 time_based=1 00:09:43.346 runtime=1 00:09:43.346 ioengine=libaio 00:09:43.346 direct=1 00:09:43.346 bs=4096 00:09:43.346 iodepth=1 00:09:43.346 norandommap=0 00:09:43.346 numjobs=1 00:09:43.346 00:09:43.346 verify_dump=1 00:09:43.346 verify_backlog=512 00:09:43.346 verify_state_save=0 00:09:43.346 do_verify=1 00:09:43.346 verify=crc32c-intel 00:09:43.346 [job0] 00:09:43.346 filename=/dev/nvme0n1 00:09:43.346 [job1] 00:09:43.346 filename=/dev/nvme0n2 00:09:43.346 [job2] 00:09:43.346 filename=/dev/nvme0n3 00:09:43.346 [job3] 00:09:43.346 filename=/dev/nvme0n4 00:09:43.346 Could not set queue depth (nvme0n1) 00:09:43.346 Could not set queue depth (nvme0n2) 00:09:43.346 Could not set queue depth (nvme0n3) 00:09:43.346 Could not set queue depth (nvme0n4) 00:09:43.346 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.346 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.346 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.346 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.346 fio-3.35 00:09:43.346 Starting 4 threads 00:09:44.722 00:09:44.722 job0: (groupid=0, jobs=1): err= 0: pid=71910: Mon Dec 16 11:29:49 2024 00:09:44.722 read: IOPS=1088, BW=4356KiB/s (4460kB/s)(4360KiB/1001msec) 00:09:44.722 slat (nsec): min=16582, max=81365, avg=33214.52, stdev=10376.38 00:09:44.722 clat (usec): min=176, max=641, avg=397.32, stdev=47.75 00:09:44.722 lat (usec): min=195, max=673, avg=430.53, stdev=48.00 00:09:44.722 clat percentiles (usec): 00:09:44.722 | 1.00th=[ 285], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 359], 
00:09:44.722 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 408], 00:09:44.722 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 457], 95.00th=[ 474], 00:09:44.722 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 537], 99.95th=[ 644], 00:09:44.722 | 99.99th=[ 644] 00:09:44.722 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.722 slat (usec): min=28, max=190, avg=41.31, stdev= 9.73 00:09:44.722 clat (usec): min=147, max=580, avg=298.30, stdev=59.16 00:09:44.722 lat (usec): min=182, max=770, avg=339.62, stdev=59.53 00:09:44.722 clat percentiles (usec): 00:09:44.722 | 1.00th=[ 178], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 251], 00:09:44.722 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:09:44.722 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 408], 00:09:44.722 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 553], 99.95th=[ 578], 00:09:44.722 | 99.99th=[ 578] 00:09:44.722 bw ( KiB/s): min= 6568, max= 6568, per=22.95%, avg=6568.00, stdev= 0.00, samples=1 00:09:44.722 iops : min= 1642, max= 1642, avg=1642.00, stdev= 0.00, samples=1 00:09:44.722 lat (usec) : 250=11.54%, 500=87.59%, 750=0.88% 00:09:44.722 cpu : usr=1.90%, sys=7.60%, ctx=2627, majf=0, minf=11 00:09:44.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 issued rwts: total=1090,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.723 job1: (groupid=0, jobs=1): err= 0: pid=71911: Mon Dec 16 11:29:49 2024 00:09:44.723 read: IOPS=1712, BW=6849KiB/s (7014kB/s)(6856KiB/1001msec) 00:09:44.723 slat (nsec): min=14707, max=53228, avg=18792.99, stdev=3987.55 00:09:44.723 clat (usec): min=178, max=796, avg=266.58, stdev=44.46 00:09:44.723 lat (usec): min=194, max=827, avg=285.37, stdev=45.29 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 229], 00:09:44.723 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 277], 00:09:44.723 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:09:44.723 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 742], 99.95th=[ 799], 00:09:44.723 | 99.99th=[ 799] 00:09:44.723 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.723 slat (usec): min=22, max=200, avg=29.18, stdev= 7.89 00:09:44.723 clat (usec): min=132, max=423, avg=215.86, stdev=35.01 00:09:44.723 lat (usec): min=159, max=623, avg=245.04, stdev=36.85 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 186], 00:09:44.723 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 223], 00:09:44.723 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 277], 00:09:44.723 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 367], 99.95th=[ 412], 00:09:44.723 | 99.99th=[ 424] 00:09:44.723 bw ( KiB/s): min= 8192, max= 8192, per=28.63%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.723 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.723 lat (usec) : 250=62.41%, 500=37.51%, 750=0.05%, 1000=0.03% 00:09:44.723 cpu : usr=1.70%, sys=7.00%, ctx=3762, majf=0, minf=9 00:09:44.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:44.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 issued rwts: total=1714,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.723 job2: (groupid=0, jobs=1): err= 0: pid=71912: Mon Dec 16 11:29:49 2024 00:09:44.723 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:44.723 slat (nsec): min=14773, max=47366, avg=19540.58, stdev=3821.23 00:09:44.723 clat (usec): min=187, max=3643, avg=280.08, stdev=116.60 00:09:44.723 lat (usec): min=203, max=3659, avg=299.62, stdev=116.86 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 239], 00:09:44.723 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 285], 00:09:44.723 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:09:44.723 | 99.00th=[ 371], 99.50th=[ 529], 99.90th=[ 2606], 99.95th=[ 3654], 00:09:44.723 | 99.99th=[ 3654] 00:09:44.723 write: IOPS=2038, BW=8156KiB/s (8352kB/s)(8164KiB/1001msec); 0 zone resets 00:09:44.723 slat (usec): min=22, max=182, avg=29.41, stdev= 7.24 00:09:44.723 clat (usec): min=139, max=5926, avg=230.93, stdev=221.72 00:09:44.723 lat (usec): min=165, max=5965, avg=260.34, stdev=222.61 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 188], 00:09:44.723 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:09:44.723 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:09:44.723 | 99.00th=[ 322], 99.50th=[ 379], 99.90th=[ 3752], 99.95th=[ 5866], 00:09:44.723 | 99.99th=[ 5932] 00:09:44.723 bw ( KiB/s): min= 8192, max= 8192, per=28.63%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.723 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.723 lat (usec) : 250=57.45%, 500=42.10%, 750=0.20%, 1000=0.03% 00:09:44.723 lat (msec) : 2=0.03%, 4=0.14%, 10=0.06% 00:09:44.723 cpu : usr=0.90%, sys=7.70%, ctx=3578, majf=0, minf=9 00:09:44.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 issued rwts: total=1536,2041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.723 job3: (groupid=0, jobs=1): err= 0: pid=71913: Mon Dec 16 11:29:49 2024 00:09:44.723 read: IOPS=1036, BW=4148KiB/s (4247kB/s)(4152KiB/1001msec) 00:09:44.723 slat (nsec): min=17488, max=85314, avg=25492.17, stdev=7687.00 00:09:44.723 clat (usec): min=250, max=1047, avg=413.18, stdev=51.89 00:09:44.723 lat (usec): min=306, max=1077, avg=438.67, stdev=52.85 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 318], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 375], 00:09:44.723 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 420], 00:09:44.723 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 490], 00:09:44.723 | 99.00th=[ 553], 99.50th=[ 619], 99.90th=[ 709], 99.95th=[ 1045], 00:09:44.723 | 99.99th=[ 1045] 00:09:44.723 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.723 slat (usec): min=28, max=203, avg=42.88, stdev=10.30 00:09:44.723 clat (usec): min=160, max=3421, avg=307.04, stdev=119.46 00:09:44.723 lat (usec): min=206, max=3458, avg=349.91, stdev=119.27 00:09:44.723 clat percentiles (usec): 00:09:44.723 | 1.00th=[ 190], 
5.00th=[ 217], 10.00th=[ 235], 20.00th=[ 251], 00:09:44.723 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:09:44.723 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 420], 00:09:44.723 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 2474], 99.95th=[ 3425], 00:09:44.723 | 99.99th=[ 3425] 00:09:44.723 bw ( KiB/s): min= 6352, max= 6352, per=22.20%, avg=6352.00, stdev= 0.00, samples=1 00:09:44.723 iops : min= 1588, max= 1588, avg=1588.00, stdev= 0.00, samples=1 00:09:44.723 lat (usec) : 250=11.31%, 500=86.75%, 750=1.75%, 1000=0.04% 00:09:44.723 lat (msec) : 2=0.08%, 4=0.08% 00:09:44.723 cpu : usr=1.60%, sys=7.10%, ctx=2575, majf=0, minf=19 00:09:44.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.723 issued rwts: total=1038,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.723 00:09:44.723 Run status group 0 (all jobs): 00:09:44.723 READ: bw=21.0MiB/s (22.0MB/s), 4148KiB/s-6849KiB/s (4247kB/s-7014kB/s), io=21.0MiB (22.0MB), run=1001-1001msec 00:09:44.723 WRITE: bw=27.9MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.3MB), run=1001-1001msec 00:09:44.723 00:09:44.723 Disk stats (read/write): 00:09:44.723 nvme0n1: ios=1074/1239, merge=0/0, ticks=446/396, in_queue=842, util=88.78% 00:09:44.723 nvme0n2: ios=1585/1694, merge=0/0, ticks=457/379, in_queue=836, util=88.90% 00:09:44.723 nvme0n3: ios=1553/1560, merge=0/0, ticks=466/364, in_queue=830, util=88.59% 00:09:44.723 nvme0n4: ios=1024/1188, merge=0/0, ticks=430/384, in_queue=814, util=89.75% 00:09:44.723 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:44.723 [global] 00:09:44.723 thread=1 00:09:44.723 invalidate=1 00:09:44.723 rw=write 00:09:44.723 time_based=1 00:09:44.723 runtime=1 00:09:44.723 ioengine=libaio 00:09:44.723 direct=1 00:09:44.723 bs=4096 00:09:44.723 iodepth=128 00:09:44.723 norandommap=0 00:09:44.723 numjobs=1 00:09:44.723 00:09:44.723 verify_dump=1 00:09:44.723 verify_backlog=512 00:09:44.723 verify_state_save=0 00:09:44.723 do_verify=1 00:09:44.723 verify=crc32c-intel 00:09:44.723 [job0] 00:09:44.723 filename=/dev/nvme0n1 00:09:44.723 [job1] 00:09:44.723 filename=/dev/nvme0n2 00:09:44.723 [job2] 00:09:44.723 filename=/dev/nvme0n3 00:09:44.723 [job3] 00:09:44.723 filename=/dev/nvme0n4 00:09:44.723 Could not set queue depth (nvme0n1) 00:09:44.723 Could not set queue depth (nvme0n2) 00:09:44.723 Could not set queue depth (nvme0n3) 00:09:44.723 Could not set queue depth (nvme0n4) 00:09:44.723 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.723 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.723 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.723 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.723 fio-3.35 00:09:44.723 Starting 4 threads 00:09:46.099 00:09:46.099 job0: (groupid=0, jobs=1): err= 0: pid=71968: Mon Dec 16 11:29:51 2024 00:09:46.099 read: IOPS=3717, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec) 00:09:46.099 slat 
(usec): min=5, max=4307, avg=124.97, stdev=601.70 00:09:46.099 clat (usec): min=318, max=20393, avg=16537.76, stdev=1828.53 00:09:46.099 lat (usec): min=3926, max=20923, avg=16662.73, stdev=1738.38 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[ 6390], 5.00th=[13698], 10.00th=[15795], 20.00th=[16188], 00:09:46.099 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16581], 60.00th=[16909], 00:09:46.099 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18220], 95.00th=[18482], 00:09:46.099 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:09:46.099 | 99.99th=[20317] 00:09:46.099 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:46.099 slat (usec): min=11, max=4484, avg=122.34, stdev=558.79 00:09:46.099 clat (usec): min=12205, max=20469, avg=15840.13, stdev=1829.49 00:09:46.099 lat (usec): min=12230, max=20492, avg=15962.47, stdev=1820.06 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[12518], 5.00th=[12911], 10.00th=[13304], 20.00th=[13960], 00:09:46.099 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:09:46.099 | 70.00th=[16909], 80.00th=[17695], 90.00th=[18482], 95.00th=[18744], 00:09:46.099 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:09:46.099 | 99.99th=[20579] 00:09:46.099 bw ( KiB/s): min=16384, max=16384, per=35.21%, avg=16384.00, stdev= 0.00, samples=2 00:09:46.099 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:46.099 lat (usec) : 500=0.01% 00:09:46.099 lat (msec) : 4=0.05%, 10=0.54%, 20=98.93%, 50=0.47% 00:09:46.099 cpu : usr=3.88%, sys=11.95%, ctx=353, majf=0, minf=7 00:09:46.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:46.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.099 issued rwts: total=3736,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.099 job1: (groupid=0, jobs=1): err= 0: pid=71969: Mon Dec 16 11:29:51 2024 00:09:46.099 read: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1003msec) 00:09:46.099 slat (usec): min=6, max=8970, avg=146.40, stdev=704.33 00:09:46.099 clat (usec): min=1788, max=26506, avg=17846.67, stdev=3140.57 00:09:46.099 lat (usec): min=4913, max=27288, avg=17993.06, stdev=3190.40 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13829], 20.00th=[16188], 00:09:46.099 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[18220], 00:09:46.099 | 70.00th=[18482], 80.00th=[19268], 90.00th=[22152], 95.00th=[23725], 00:09:46.099 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:09:46.099 | 99.99th=[26608] 00:09:46.099 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:09:46.099 slat (usec): min=13, max=8342, avg=127.70, stdev=432.28 00:09:46.099 clat (usec): min=10860, max=26442, avg=17992.00, stdev=2244.38 00:09:46.099 lat (usec): min=10893, max=26507, avg=18119.69, stdev=2263.46 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[11338], 5.00th=[13829], 10.00th=[16057], 20.00th=[17171], 00:09:46.099 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:09:46.099 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19792], 95.00th=[22676], 00:09:46.099 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:09:46.099 | 99.99th=[26346] 00:09:46.099 bw ( 
KiB/s): min=13576, max=15126, per=30.84%, avg=14351.00, stdev=1096.02, samples=2 00:09:46.099 iops : min= 3394, max= 3781, avg=3587.50, stdev=273.65, samples=2 00:09:46.099 lat (msec) : 2=0.01%, 10=0.48%, 20=85.84%, 50=13.66% 00:09:46.099 cpu : usr=4.29%, sys=10.88%, ctx=578, majf=0, minf=12 00:09:46.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:46.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.099 issued rwts: total=3479,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.099 job2: (groupid=0, jobs=1): err= 0: pid=71970: Mon Dec 16 11:29:51 2024 00:09:46.099 read: IOPS=1593, BW=6372KiB/s (6525kB/s)(6404KiB/1005msec) 00:09:46.099 slat (usec): min=7, max=14102, avg=303.28, stdev=1528.06 00:09:46.099 clat (usec): min=433, max=49229, avg=37388.02, stdev=6236.77 00:09:46.099 lat (usec): min=12999, max=49244, avg=37691.31, stdev=6065.66 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[13304], 5.00th=[28181], 10.00th=[31065], 20.00th=[32637], 00:09:46.099 | 30.00th=[34866], 40.00th=[36439], 50.00th=[37487], 60.00th=[40109], 00:09:46.099 | 70.00th=[40633], 80.00th=[42206], 90.00th=[43254], 95.00th=[46400], 00:09:46.099 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:09:46.099 | 99.99th=[49021] 00:09:46.099 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:46.099 slat (usec): min=14, max=10947, avg=244.51, stdev=1240.03 00:09:46.099 clat (usec): min=19771, max=47698, avg=32157.76, stdev=6321.32 00:09:46.099 lat (usec): min=24743, max=47725, avg=32402.27, stdev=6250.00 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[21627], 5.00th=[25560], 10.00th=[25822], 20.00th=[26608], 00:09:46.099 | 30.00th=[27132], 40.00th=[27919], 50.00th=[29754], 60.00th=[32375], 00:09:46.099 | 70.00th=[35914], 80.00th=[39584], 90.00th=[41157], 95.00th=[43254], 00:09:46.099 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:09:46.099 | 99.99th=[47449] 00:09:46.099 bw ( KiB/s): min= 7688, max= 8208, per=17.08%, avg=7948.00, stdev=367.70, samples=2 00:09:46.099 iops : min= 1922, max= 2052, avg=1987.00, stdev=91.92, samples=2 00:09:46.099 lat (usec) : 500=0.03% 00:09:46.099 lat (msec) : 20=0.93%, 50=99.04% 00:09:46.099 cpu : usr=1.69%, sys=6.27%, ctx=126, majf=0, minf=9 00:09:46.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:46.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.099 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.099 job3: (groupid=0, jobs=1): err= 0: pid=71971: Mon Dec 16 11:29:51 2024 00:09:46.099 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:09:46.099 slat (usec): min=5, max=13680, avg=268.00, stdev=1367.25 00:09:46.099 clat (usec): min=23334, max=49872, avg=33845.51, stdev=4657.46 00:09:46.099 lat (usec): min=23350, max=49912, avg=34113.51, stdev=4782.08 00:09:46.099 clat percentiles (usec): 00:09:46.099 | 1.00th=[23725], 5.00th=[28443], 10.00th=[28967], 20.00th=[30278], 00:09:46.100 | 30.00th=[30802], 40.00th=[31589], 50.00th=[32637], 60.00th=[34341], 00:09:46.100 | 70.00th=[35914], 80.00th=[37487], 
90.00th=[40633], 95.00th=[42206], 00:09:46.100 | 99.00th=[46400], 99.50th=[46400], 99.90th=[49546], 99.95th=[50070], 00:09:46.100 | 99.99th=[50070] 00:09:46.100 write: IOPS=1952, BW=7809KiB/s (7996kB/s)(7848KiB/1005msec); 0 zone resets 00:09:46.100 slat (usec): min=10, max=9758, avg=292.81, stdev=1204.15 00:09:46.100 clat (usec): min=481, max=55419, avg=37939.95, stdev=11238.48 00:09:46.100 lat (usec): min=6945, max=55451, avg=38232.76, stdev=11307.45 00:09:46.100 clat percentiles (usec): 00:09:46.100 | 1.00th=[ 7504], 5.00th=[21890], 10.00th=[26870], 20.00th=[29754], 00:09:46.100 | 30.00th=[30540], 40.00th=[32113], 50.00th=[37487], 60.00th=[41681], 00:09:46.100 | 70.00th=[45876], 80.00th=[50594], 90.00th=[53216], 95.00th=[54264], 00:09:46.100 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:09:46.100 | 99.99th=[55313] 00:09:46.100 bw ( KiB/s): min= 6480, max= 8208, per=15.78%, avg=7344.00, stdev=1221.88, samples=2 00:09:46.100 iops : min= 1620, max= 2052, avg=1836.00, stdev=305.47, samples=2 00:09:46.100 lat (usec) : 500=0.03% 00:09:46.100 lat (msec) : 10=1.20%, 20=1.20%, 50=85.11%, 100=12.46% 00:09:46.100 cpu : usr=2.09%, sys=5.98%, ctx=208, majf=0, minf=13 00:09:46.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:09:46.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.100 issued rwts: total=1536,1962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.100 00:09:46.100 Run status group 0 (all jobs): 00:09:46.100 READ: bw=40.2MiB/s (42.2MB/s), 6113KiB/s-14.5MiB/s (6260kB/s-15.2MB/s), io=40.4MiB (42.4MB), run=1003-1005msec 00:09:46.100 WRITE: bw=45.4MiB/s (47.6MB/s), 7809KiB/s-15.9MiB/s (7996kB/s-16.7MB/s), io=45.7MiB (47.9MB), run=1003-1005msec 00:09:46.100 00:09:46.100 Disk stats (read/write): 00:09:46.100 nvme0n1: ios=3186/3584, merge=0/0, ticks=12029/12359, in_queue=24388, util=88.37% 00:09:46.100 nvme0n2: ios=2986/3072, merge=0/0, ticks=26003/25259, in_queue=51262, util=88.35% 00:09:46.100 nvme0n3: ios=1553/1600, merge=0/0, ticks=14386/11419, in_queue=25805, util=89.33% 00:09:46.100 nvme0n4: ios=1358/1536, merge=0/0, ticks=14374/19937, in_queue=34311, util=89.47% 00:09:46.100 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:46.100 [global] 00:09:46.100 thread=1 00:09:46.100 invalidate=1 00:09:46.100 rw=randwrite 00:09:46.100 time_based=1 00:09:46.100 runtime=1 00:09:46.100 ioengine=libaio 00:09:46.100 direct=1 00:09:46.100 bs=4096 00:09:46.100 iodepth=128 00:09:46.100 norandommap=0 00:09:46.100 numjobs=1 00:09:46.100 00:09:46.100 verify_dump=1 00:09:46.100 verify_backlog=512 00:09:46.100 verify_state_save=0 00:09:46.100 do_verify=1 00:09:46.100 verify=crc32c-intel 00:09:46.100 [job0] 00:09:46.100 filename=/dev/nvme0n1 00:09:46.100 [job1] 00:09:46.100 filename=/dev/nvme0n2 00:09:46.100 [job2] 00:09:46.100 filename=/dev/nvme0n3 00:09:46.100 [job3] 00:09:46.100 filename=/dev/nvme0n4 00:09:46.100 Could not set queue depth (nvme0n1) 00:09:46.100 Could not set queue depth (nvme0n2) 00:09:46.100 Could not set queue depth (nvme0n3) 00:09:46.100 Could not set queue depth (nvme0n4) 00:09:46.100 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.100 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.100 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.100 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.100 fio-3.35 00:09:46.100 Starting 4 threads 00:09:47.475 00:09:47.475 job0: (groupid=0, jobs=1): err= 0: pid=72031: Mon Dec 16 11:29:52 2024 00:09:47.475 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:09:47.475 slat (usec): min=7, max=9391, avg=157.31, stdev=789.11 00:09:47.475 clat (usec): min=9888, max=30232, avg=19627.96, stdev=3033.60 00:09:47.475 lat (usec): min=10786, max=30269, avg=19785.27, stdev=3092.97 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[12256], 5.00th=[13829], 10.00th=[15664], 20.00th=[18220], 00:09:47.475 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:09:47.475 | 70.00th=[20579], 80.00th=[21365], 90.00th=[23462], 95.00th=[25297], 00:09:47.475 | 99.00th=[27919], 99.50th=[28443], 99.90th=[28967], 99.95th=[29492], 00:09:47.475 | 99.99th=[30278] 00:09:47.475 write: IOPS=3506, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1008msec); 0 zone resets 00:09:47.475 slat (usec): min=12, max=9257, avg=137.31, stdev=558.22 00:09:47.475 clat (usec): min=7262, max=29023, avg=18958.60, stdev=2831.69 00:09:47.475 lat (usec): min=8247, max=29983, avg=19095.90, stdev=2874.83 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[11076], 5.00th=[14091], 10.00th=[16057], 20.00th=[17433], 00:09:47.475 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19268], 00:09:47.475 | 70.00th=[20055], 80.00th=[20841], 90.00th=[21365], 95.00th=[23987], 00:09:47.475 | 99.00th=[27657], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:09:47.475 | 99.99th=[28967] 00:09:47.475 bw ( KiB/s): min=13320, max=13944, per=26.24%, avg=13632.00, stdev=441.23, samples=2 00:09:47.475 iops : min= 3330, max= 3486, avg=3408.00, stdev=110.31, samples=2 00:09:47.475 lat (msec) : 10=0.27%, 20=65.01%, 50=34.72% 00:09:47.475 cpu : usr=4.87%, sys=9.43%, ctx=446, majf=0, minf=5 00:09:47.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:47.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.475 issued rwts: total=3072,3535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.475 job1: (groupid=0, jobs=1): err= 0: pid=72032: Mon Dec 16 11:29:52 2024 00:09:47.475 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:47.475 slat (usec): min=5, max=6693, avg=151.93, stdev=808.93 00:09:47.475 clat (usec): min=12684, max=28207, avg=19437.19, stdev=2035.51 00:09:47.475 lat (usec): min=12704, max=28239, avg=19589.12, stdev=2104.19 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[14353], 5.00th=[16057], 10.00th=[16581], 20.00th=[17957], 00:09:47.475 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[20055], 00:09:47.475 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:09:47.475 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27132], 99.95th=[27919], 00:09:47.475 | 99.99th=[28181] 00:09:47.475 write: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1003msec); 0 zone resets 00:09:47.475 slat (usec): min=11, max=6323, avg=141.16, stdev=646.76 00:09:47.475 clat (usec): 
min=2707, max=28195, avg=18850.78, stdev=2794.73 00:09:47.475 lat (usec): min=2726, max=28214, avg=18991.94, stdev=2781.74 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[ 9372], 5.00th=[14222], 10.00th=[16319], 20.00th=[17171], 00:09:47.475 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19268], 60.00th=[19530], 00:09:47.475 | 70.00th=[20055], 80.00th=[20317], 90.00th=[21627], 95.00th=[22676], 00:09:47.475 | 99.00th=[26084], 99.50th=[26608], 99.90th=[27657], 99.95th=[28181], 00:09:47.475 | 99.99th=[28181] 00:09:47.475 bw ( KiB/s): min=13640, max=13648, per=26.26%, avg=13644.00, stdev= 5.66, samples=2 00:09:47.475 iops : min= 3410, max= 3412, avg=3411.00, stdev= 1.41, samples=2 00:09:47.475 lat (msec) : 4=0.33%, 10=0.48%, 20=65.39%, 50=33.80% 00:09:47.475 cpu : usr=2.99%, sys=11.78%, ctx=327, majf=0, minf=7 00:09:47.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:47.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.475 issued rwts: total=3072,3538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.475 job2: (groupid=0, jobs=1): err= 0: pid=72033: Mon Dec 16 11:29:52 2024 00:09:47.475 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:09:47.475 slat (usec): min=7, max=10381, avg=179.12, stdev=919.67 00:09:47.475 clat (usec): min=12777, max=34272, avg=22851.02, stdev=3054.28 00:09:47.475 lat (usec): min=13529, max=36865, avg=23030.14, stdev=3146.43 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[14877], 5.00th=[17957], 10.00th=[19268], 20.00th=[20579], 00:09:47.475 | 30.00th=[21627], 40.00th=[22414], 50.00th=[22676], 60.00th=[23462], 00:09:47.475 | 70.00th=[23987], 80.00th=[25035], 90.00th=[26608], 95.00th=[27919], 00:09:47.475 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33424], 99.95th=[33424], 00:09:47.475 | 99.99th=[34341] 00:09:47.475 write: IOPS=3024, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1010msec); 0 zone resets 00:09:47.475 slat (usec): min=10, max=11446, avg=168.52, stdev=969.55 00:09:47.475 clat (usec): min=8729, max=35608, avg=22444.45, stdev=3025.55 00:09:47.475 lat (usec): min=9998, max=35648, avg=22612.97, stdev=3159.11 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[12387], 5.00th=[17433], 10.00th=[19792], 20.00th=[20841], 00:09:47.475 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22676], 00:09:47.475 | 70.00th=[23462], 80.00th=[24511], 90.00th=[25560], 95.00th=[27132], 00:09:47.475 | 99.00th=[32637], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:09:47.475 | 99.99th=[35390] 00:09:47.475 bw ( KiB/s): min=11136, max=12288, per=22.54%, avg=11712.00, stdev=814.59, samples=2 00:09:47.475 iops : min= 2784, max= 3072, avg=2928.00, stdev=203.65, samples=2 00:09:47.475 lat (msec) : 10=0.02%, 20=12.63%, 50=87.36% 00:09:47.475 cpu : usr=3.77%, sys=8.13%, ctx=266, majf=0, minf=5 00:09:47.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:47.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.475 issued rwts: total=2560,3055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.475 job3: (groupid=0, jobs=1): err= 0: pid=72034: Mon Dec 16 11:29:52 2024 00:09:47.475 read: IOPS=2547, BW=9.95MiB/s 
(10.4MB/s)(10.0MiB/1005msec) 00:09:47.475 slat (usec): min=8, max=6128, avg=180.78, stdev=894.60 00:09:47.475 clat (usec): min=15638, max=29477, avg=23001.50, stdev=2182.94 00:09:47.475 lat (usec): min=16071, max=29492, avg=23182.28, stdev=2032.15 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[16450], 5.00th=[19006], 10.00th=[20317], 20.00th=[20579], 00:09:47.475 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:09:47.475 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:09:47.475 | 99.00th=[27919], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:09:47.475 | 99.99th=[29492] 00:09:47.475 write: IOPS=2977, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:09:47.475 slat (usec): min=11, max=6252, avg=170.04, stdev=649.36 00:09:47.475 clat (usec): min=4356, max=29818, avg=22574.08, stdev=3280.42 00:09:47.475 lat (usec): min=5222, max=29847, avg=22744.12, stdev=3257.00 00:09:47.475 clat percentiles (usec): 00:09:47.475 | 1.00th=[ 8979], 5.00th=[17957], 10.00th=[19006], 20.00th=[20317], 00:09:47.475 | 30.00th=[21103], 40.00th=[22414], 50.00th=[23200], 60.00th=[23725], 00:09:47.475 | 70.00th=[24249], 80.00th=[25035], 90.00th=[26084], 95.00th=[27132], 00:09:47.475 | 99.00th=[28967], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:09:47.475 | 99.99th=[29754] 00:09:47.475 bw ( KiB/s): min=10632, max=12288, per=22.06%, avg=11460.00, stdev=1170.97, samples=2 00:09:47.475 iops : min= 2658, max= 3072, avg=2865.00, stdev=292.74, samples=2 00:09:47.475 lat (msec) : 10=0.65%, 20=11.94%, 50=87.41% 00:09:47.475 cpu : usr=3.78%, sys=9.06%, ctx=348, majf=0, minf=6 00:09:47.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:47.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.475 issued rwts: total=2560,2992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.475 00:09:47.475 Run status group 0 (all jobs): 00:09:47.476 READ: bw=43.6MiB/s (45.7MB/s), 9.90MiB/s-12.0MiB/s (10.4MB/s-12.5MB/s), io=44.0MiB (46.1MB), run=1003-1010msec 00:09:47.476 WRITE: bw=50.7MiB/s (53.2MB/s), 11.6MiB/s-13.8MiB/s (12.2MB/s-14.4MB/s), io=51.2MiB (53.7MB), run=1003-1010msec 00:09:47.476 00:09:47.476 Disk stats (read/write): 00:09:47.476 nvme0n1: ios=2610/3071, merge=0/0, ticks=24284/26104, in_queue=50388, util=87.56% 00:09:47.476 nvme0n2: ios=2642/3072, merge=0/0, ticks=15687/16874, in_queue=32561, util=88.96% 00:09:47.476 nvme0n3: ios=2054/2560, merge=0/0, ticks=23545/25972, in_queue=49517, util=88.74% 00:09:47.476 nvme0n4: ios=2048/2552, merge=0/0, ticks=11660/13733, in_queue=25393, util=89.63% 00:09:47.476 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:47.476 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72047 00:09:47.476 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:47.476 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:47.476 [global] 00:09:47.476 thread=1 00:09:47.476 invalidate=1 00:09:47.476 rw=read 00:09:47.476 time_based=1 00:09:47.476 runtime=10 00:09:47.476 ioengine=libaio 00:09:47.476 direct=1 00:09:47.476 bs=4096 00:09:47.476 iodepth=1 00:09:47.476 norandommap=1 00:09:47.476 numjobs=1 
00:09:47.476 00:09:47.476 [job0] 00:09:47.476 filename=/dev/nvme0n1 00:09:47.476 [job1] 00:09:47.476 filename=/dev/nvme0n2 00:09:47.476 [job2] 00:09:47.476 filename=/dev/nvme0n3 00:09:47.476 [job3] 00:09:47.476 filename=/dev/nvme0n4 00:09:47.476 Could not set queue depth (nvme0n1) 00:09:47.476 Could not set queue depth (nvme0n2) 00:09:47.476 Could not set queue depth (nvme0n3) 00:09:47.476 Could not set queue depth (nvme0n4) 00:09:47.476 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.476 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.476 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.476 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.476 fio-3.35 00:09:47.476 Starting 4 threads 00:09:50.791 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:50.791 fio: pid=72090, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.791 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36057088, buflen=4096 00:09:50.791 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:50.791 fio: pid=72089, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.791 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45285376, buflen=4096 00:09:50.792 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.792 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:51.051 fio: pid=72087, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.051 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38084608, buflen=4096 00:09:51.309 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.309 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:51.567 fio: pid=72088, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.567 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58028032, buflen=4096 00:09:51.567 00:09:51.568 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72087: Mon Dec 16 11:29:56 2024 00:09:51.568 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(36.3MiB/3552msec) 00:09:51.568 slat (usec): min=8, max=14475, avg=23.51, stdev=210.27 00:09:51.568 clat (usec): min=137, max=6152, avg=356.51, stdev=132.83 00:09:51.568 lat (usec): min=153, max=14676, avg=380.01, stdev=247.70 00:09:51.568 clat percentiles (usec): 00:09:51.568 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 210], 20.00th=[ 262], 00:09:51.568 | 30.00th=[ 310], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 392], 00:09:51.568 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 465], 95.00th=[ 486], 00:09:51.568 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 1074], 99.95th=[ 2278], 
00:09:51.568 | 99.99th=[ 6128] 00:09:51.568 bw ( KiB/s): min= 9112, max= 9792, per=21.15%, avg=9529.33, stdev=283.04, samples=6 00:09:51.568 iops : min= 2278, max= 2448, avg=2382.33, stdev=70.76, samples=6 00:09:51.568 lat (usec) : 250=17.80%, 500=78.75%, 750=3.25%, 1000=0.09% 00:09:51.568 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01% 00:09:51.568 cpu : usr=1.04%, sys=4.34%, ctx=9305, majf=0, minf=1 00:09:51.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 issued rwts: total=9299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.568 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72088: Mon Dec 16 11:29:56 2024 00:09:51.568 read: IOPS=3683, BW=14.4MiB/s (15.1MB/s)(55.3MiB/3846msec) 00:09:51.568 slat (usec): min=13, max=15646, avg=24.43, stdev=224.74 00:09:51.568 clat (usec): min=119, max=3959, avg=245.21, stdev=64.60 00:09:51.568 lat (usec): min=173, max=15951, avg=269.64, stdev=234.70 00:09:51.568 clat percentiles (usec): 00:09:51.568 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 204], 20.00th=[ 217], 00:09:51.568 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 249], 00:09:51.568 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 314], 00:09:51.568 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 685], 99.95th=[ 1319], 00:09:51.568 | 99.99th=[ 2474] 00:09:51.568 bw ( KiB/s): min=12431, max=15560, per=32.36%, avg=14579.29, stdev=1166.66, samples=7 00:09:51.568 iops : min= 3107, max= 3890, avg=3644.71, stdev=291.90, samples=7 00:09:51.568 lat (usec) : 250=62.13%, 500=37.70%, 750=0.07%, 1000=0.03% 00:09:51.568 lat (msec) : 2=0.02%, 4=0.04% 00:09:51.568 cpu : usr=1.04%, sys=5.90%, ctx=14190, majf=0, minf=2 00:09:51.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 issued rwts: total=14168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.568 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72089: Mon Dec 16 11:29:56 2024 00:09:51.568 read: IOPS=3364, BW=13.1MiB/s (13.8MB/s)(43.2MiB/3286msec) 00:09:51.568 slat (usec): min=12, max=11817, avg=19.23, stdev=130.61 00:09:51.568 clat (usec): min=154, max=2799, avg=276.12, stdev=95.95 00:09:51.568 lat (usec): min=168, max=11996, avg=295.35, stdev=161.88 00:09:51.568 clat percentiles (usec): 00:09:51.568 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:09:51.568 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 262], 00:09:51.568 | 70.00th=[ 293], 80.00th=[ 334], 90.00th=[ 396], 95.00th=[ 437], 00:09:51.568 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 1029], 99.95th=[ 1745], 00:09:51.568 | 99.99th=[ 2573] 00:09:51.568 bw ( KiB/s): min= 9712, max=15760, per=29.92%, avg=13480.00, stdev=2903.93, samples=6 00:09:51.568 iops : min= 2428, max= 3940, avg=3370.00, stdev=725.98, samples=6 00:09:51.568 lat (usec) : 250=52.73%, 500=45.97%, 750=1.04%, 1000=0.14% 00:09:51.568 lat (msec) : 2=0.07%, 4=0.05% 00:09:51.568 cpu : usr=1.22%, sys=4.66%, ctx=11066, majf=0, minf=2 
00:09:51.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 issued rwts: total=11057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.568 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72090: Mon Dec 16 11:29:56 2024 00:09:51.568 read: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(34.4MiB/2986msec) 00:09:51.568 slat (nsec): min=8609, max=58470, avg=14873.41, stdev=3511.55 00:09:51.568 clat (usec): min=155, max=2298, avg=322.49, stdev=102.95 00:09:51.568 lat (usec): min=169, max=2311, avg=337.37, stdev=102.39 00:09:51.568 clat percentiles (usec): 00:09:51.568 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 221], 00:09:51.568 | 30.00th=[ 245], 40.00th=[ 273], 50.00th=[ 314], 60.00th=[ 363], 00:09:51.568 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 478], 00:09:51.568 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 635], 99.95th=[ 644], 00:09:51.568 | 99.99th=[ 2311] 00:09:51.568 bw ( KiB/s): min= 9568, max=15904, per=27.27%, avg=12286.40, stdev=3271.34, samples=5 00:09:51.568 iops : min= 2392, max= 3976, avg=3071.60, stdev=817.83, samples=5 00:09:51.568 lat (usec) : 250=32.50%, 500=65.07%, 750=2.40% 00:09:51.568 lat (msec) : 2=0.01%, 4=0.01% 00:09:51.568 cpu : usr=0.97%, sys=3.62%, ctx=8805, majf=0, minf=2 00:09:51.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.568 issued rwts: total=8804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.568 00:09:51.568 Run status group 0 (all jobs): 00:09:51.568 READ: bw=44.0MiB/s (46.1MB/s), 10.2MiB/s-14.4MiB/s (10.7MB/s-15.1MB/s), io=169MiB (177MB), run=2986-3846msec 00:09:51.568 00:09:51.568 Disk stats (read/write): 00:09:51.568 nvme0n1: ios=8493/0, merge=0/0, ticks=3157/0, in_queue=3157, util=95.05% 00:09:51.568 nvme0n2: ios=13105/0, merge=0/0, ticks=3347/0, in_queue=3347, util=95.10% 00:09:51.568 nvme0n3: ios=10472/0, merge=0/0, ticks=2957/0, in_queue=2957, util=96.27% 00:09:51.568 nvme0n4: ios=8542/0, merge=0/0, ticks=2722/0, in_queue=2722, util=96.69% 00:09:51.568 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.568 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:51.827 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.827 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:52.085 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.085 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:52.344 11:29:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.344 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:52.603 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.603 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:52.861 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:52.861 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72047 00:09:52.861 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:52.861 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:53.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:53.120 nvmf hotplug test: fio failed as expected 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:53.120 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.378 rmmod nvme_tcp 00:09:53.378 rmmod nvme_fabrics 00:09:53.378 rmmod nvme_keyring 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 71559 ']' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 71559 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 71559 ']' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 71559 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71559 00:09:53.378 killing process with pid 71559 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71559' 00:09:53.378 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 71559 00:09:53.379 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 71559 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.636 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
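
The hotplug exercise that just finished follows a fixed pattern: start a 10-second read job against the exported namespaces, delete the backing raid and malloc bdevs underneath it, then require fio to fail. A minimal sketch of that flow using the names traced in this log (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; this is the pattern, not the verbatim target/fio.sh):

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                  # let all four jobs ramp up first
    rpc.py bdev_raid_delete concat0          # reads now fail: Operation not supported
    rpc.py bdev_raid_delete raid0
    rpc.py bdev_malloc_delete Malloc0
    rpc.py bdev_malloc_delete Malloc1
    fio_status=0
    wait "$fio_pid" || fio_status=4          # fio exiting nonzero is the expected outcome
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'
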
00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:53.637 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:53.637 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.637 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:53.895 00:09:53.895 real 0m20.593s 00:09:53.895 user 1m18.575s 00:09:53.895 sys 0m8.845s 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.895 ************************************ 00:09:53.895 END TEST nvmf_fio_target 00:09:53.895 ************************************ 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.895 ************************************ 00:09:53.895 START TEST nvmf_bdevio 00:09:53.895 ************************************ 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.895 * Looking for test storage... 
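
Before the bdevio suite begins, note the teardown order nvmftestfini walked through above: detach the bridge ports, bring them down, delete the bridge, then the veths, then the namespace. Condensed from the commands traced in this log:

    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster         # detach from nvmf_br first
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge       # the bridge goes before the veth pairs
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    remove_spdk_ns                           # finally drops nvmf_tgt_ns_spdk itself
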
00:09:53.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:53.895 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.154 --rc genhtml_branch_coverage=1 00:09:54.154 --rc genhtml_function_coverage=1 00:09:54.154 --rc genhtml_legend=1 00:09:54.154 --rc geninfo_all_blocks=1 00:09:54.154 --rc geninfo_unexecuted_blocks=1 00:09:54.154 00:09:54.154 ' 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.154 --rc genhtml_branch_coverage=1 00:09:54.154 --rc genhtml_function_coverage=1 00:09:54.154 --rc genhtml_legend=1 00:09:54.154 --rc geninfo_all_blocks=1 00:09:54.154 --rc geninfo_unexecuted_blocks=1 00:09:54.154 00:09:54.154 ' 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.154 --rc genhtml_branch_coverage=1 00:09:54.154 --rc genhtml_function_coverage=1 00:09:54.154 --rc genhtml_legend=1 00:09:54.154 --rc geninfo_all_blocks=1 00:09:54.154 --rc geninfo_unexecuted_blocks=1 00:09:54.154 00:09:54.154 ' 00:09:54.154 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.154 --rc genhtml_branch_coverage=1 00:09:54.154 --rc genhtml_function_coverage=1 00:09:54.154 --rc genhtml_legend=1 00:09:54.154 --rc geninfo_all_blocks=1 00:09:54.154 --rc geninfo_unexecuted_blocks=1 00:09:54.154 00:09:54.154 ' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
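
The lcov probe traced a few lines up ("lt 1.15 2" via cmp_versions in scripts/common.sh) decides whether the legacy --rc lcov_* options are needed. A condensed sketch of that field-by-field comparison, assuming only the behavior visible in the trace rather than reproducing the helper verbatim:

    lt() {  # succeed when version $1 sorts strictly before version $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
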
00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:54.155 Cannot find device "nvmf_init_br" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:54.155 Cannot find device "nvmf_init_br2" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:54.155 Cannot find device "nvmf_tgt_br" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.155 Cannot find device "nvmf_tgt_br2" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:54.155 Cannot find device "nvmf_init_br" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:54.155 Cannot find device "nvmf_init_br2" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:54.155 Cannot find device "nvmf_tgt_br" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:54.155 Cannot find device "nvmf_tgt_br2" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:54.155 Cannot find device "nvmf_br" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:54.155 Cannot find device "nvmf_init_if" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:54.155 Cannot find device "nvmf_init_if2" 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:54.155 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:54.156 
11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:54.156 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:54.416 00:09:54.416 --- 10.0.0.3 ping statistics --- 00:09:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.416 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:09:54.416 00:09:54.416 --- 10.0.0.4 ping statistics --- 00:09:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.416 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:54.416 00:09:54.416 --- 10.0.0.1 ping statistics --- 00:09:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.416 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:09:54.416 00:09:54.416 --- 10.0.0.2 ping statistics --- 00:09:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.416 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=72482 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 72482 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 72482 ']' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.416 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.674 [2024-12-16 11:29:59.828503] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
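
The four successful pings confirm the topology nvmf_veth_init just built: two initiator veths on the host (10.0.0.1 and 10.0.0.2) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with every peer attached to one bridge. Condensed from the ip and iptables commands traced above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br            # bridge the host-side peers
    done
    # bring the *_if ends up as well, then add the three iptables ACCEPT rules
    # traced above (TCP dport 4420 on both initiator veths, FORWARD on nvmf_br)
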
00:09:54.674 [2024-12-16 11:29:59.828598] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.674 [2024-12-16 11:29:59.980242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.674 [2024-12-16 11:30:00.056559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.674 [2024-12-16 11:30:00.056632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.674 [2024-12-16 11:30:00.056654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.674 [2024-12-16 11:30:00.056665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.674 [2024-12-16 11:30:00.056675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.674 [2024-12-16 11:30:00.058334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.674 [2024-12-16 11:30:00.058522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.674 [2024-12-16 11:30:00.058644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.674 [2024-12-16 11:30:00.058648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 [2024-12-16 11:30:00.239280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 Malloc0 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.932 [2024-12-16 11:30:00.307816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.932 { 00:09:54.932 "params": { 00:09:54.932 "name": "Nvme$subsystem", 00:09:54.932 "trtype": "$TEST_TRANSPORT", 00:09:54.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.932 "adrfam": "ipv4", 00:09:54.932 "trsvcid": "$NVMF_PORT", 00:09:54.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.932 "hdgst": ${hdgst:-false}, 00:09:54.932 "ddgst": ${ddgst:-false} 00:09:54.932 }, 00:09:54.932 "method": "bdev_nvme_attach_controller" 00:09:54.932 } 00:09:54.932 EOF 00:09:54.932 )") 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:54.932 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.932 "params": { 00:09:54.932 "name": "Nvme1", 00:09:54.932 "trtype": "tcp", 00:09:54.932 "traddr": "10.0.0.3", 00:09:54.932 "adrfam": "ipv4", 00:09:54.932 "trsvcid": "4420", 00:09:54.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.932 "hdgst": false, 00:09:54.932 "ddgst": false 00:09:54.932 }, 00:09:54.932 "method": "bdev_nvme_attach_controller" 00:09:54.933 }' 00:09:55.190 [2024-12-16 11:30:00.372478] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
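
Before bdevio attaches over TCP, the target side is stood up with the five RPCs traced above (bdevio.sh@18 through @22): transport, backing bdev, subsystem, namespace, listener. Condensed, with rpc.py again standing in for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The attach-controller JSON printed just above reaches bdevio as --json /dev/fd/62, i.e. over an anonymous file descriptor rather than a config file on disk.
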
00:09:55.190 [2024-12-16 11:30:00.372581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72517 ] 00:09:55.190 [2024-12-16 11:30:00.525697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.190 [2024-12-16 11:30:00.601582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.447 [2024-12-16 11:30:00.601665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.447 [2024-12-16 11:30:00.601674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.447 I/O targets: 00:09:55.447 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.447 00:09:55.448 00:09:55.448 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.448 http://cunit.sourceforge.net/ 00:09:55.448 00:09:55.448 00:09:55.448 Suite: bdevio tests on: Nvme1n1 00:09:55.448 Test: blockdev write read block ...passed 00:09:55.705 Test: blockdev write zeroes read block ...passed 00:09:55.705 Test: blockdev write zeroes read no split ...passed 00:09:55.705 Test: blockdev write zeroes read split ...passed 00:09:55.705 Test: blockdev write zeroes read split partial ...passed 00:09:55.705 Test: blockdev reset ...[2024-12-16 11:30:00.913659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:55.705 [2024-12-16 11:30:00.914151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a9f70 (9): Bad file descriptor 00:09:55.705 [2024-12-16 11:30:00.934370] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:55.705 passed 00:09:55.705 Test: blockdev write read 8 blocks ...passed 00:09:55.705 Test: blockdev write read size > 128k ...passed 00:09:55.705 Test: blockdev write read invalid size ...passed 00:09:55.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:55.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:55.705 Test: blockdev write read max offset ...passed 00:09:55.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:55.705 Test: blockdev writev readv 8 blocks ...passed 00:09:55.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:55.705 Test: blockdev writev readv block ...passed 00:09:55.705 Test: blockdev writev readv size > 128k ...passed 00:09:55.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:55.705 Test: blockdev comparev and writev ...[2024-12-16 11:30:01.107156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.705 [2024-12-16 11:30:01.107244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:55.705 [2024-12-16 11:30:01.107276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.705 [2024-12-16 11:30:01.107296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.107668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.107709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.107742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.107764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.108089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.108128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.108167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.108189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.108578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.108629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:55.706 [2024-12-16 11:30:01.108666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.706 [2024-12-16 11:30:01.108688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:55.963 passed 00:09:55.963 Test: blockdev nvme passthru rw ...passed 00:09:55.963 Test: blockdev nvme passthru vendor specific ...[2024-12-16 11:30:01.190637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.963 [2024-12-16 11:30:01.190724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:55.963 passed 00:09:55.963 Test: blockdev nvme admin passthru ...[2024-12-16 11:30:01.190901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.963 [2024-12-16 11:30:01.190940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:55.963 [2024-12-16 11:30:01.191125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.963 [2024-12-16 11:30:01.191157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:55.963 [2024-12-16 11:30:01.191309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.963 [2024-12-16 11:30:01.191339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:55.963 passed 00:09:55.963 Test: blockdev copy ...passed 00:09:55.963 00:09:55.963 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.963 suites 1 1 n/a 0 0 00:09:55.963 tests 23 23 23 0 0 00:09:55.963 asserts 152 152 152 0 n/a 00:09:55.963 00:09:55.963 Elapsed time = 0.894 seconds 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.220 rmmod nvme_tcp 00:09:56.220 rmmod nvme_fabrics 00:09:56.220 rmmod nvme_keyring 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
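Condensed, the cleanup around this point amounts to the following by hand; a sketch of the traced steps, not the full helpers, with the rpc.py path assumed as before and $nvmfpid standing in for the target pid (72482 here):
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # what killprocess, traced next, boils down to; wait needs the target to be a child of this shell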
00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 72482 ']' 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 72482 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 72482 ']' 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 72482 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72482 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:56.220 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:56.220 killing process with pid 72482 00:09:56.221 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72482' 00:09:56.221 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 72482 00:09:56.221 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 72482 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:56.478 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:56.735 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:56.735 00:09:56.735 real 0m2.876s 00:09:56.735 user 0m9.018s 00:09:56.735 sys 0m0.908s 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.735 ************************************ 00:09:56.735 END TEST nvmf_bdevio 00:09:56.735 ************************************ 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:56.735 00:09:56.735 real 3m34.701s 00:09:56.735 user 11m17.362s 00:09:56.735 sys 1m2.060s 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.735 11:30:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 ************************************ 00:09:56.735 END TEST nvmf_target_core 00:09:56.735 ************************************ 00:09:56.993 11:30:02 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.993 11:30:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.993 11:30:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.993 11:30:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.993 ************************************ 00:09:56.993 START TEST nvmf_target_extra 00:09:56.993 ************************************ 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.993 * Looking for test storage... 
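The test-storage probe interleaved here compares the installed lcov version against 2.x before choosing coverage flags. Condensed from the scripts/common.sh trace that follows, a runnable sketch with the non-numeric-field guard elided:
    cmp_versions() {
        local IFS=.-: op=$2 v rel='='
        local -a ver1 ver2
        read -ra ver1 <<< "$1"      # split version fields on . - :
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && { rel='>'; break; }
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && { rel='<'; break; }
        done
        [[ $op == *"$rel"* ]]       # '<' matches '<'; '>=' matches '>' or '='
    }
    # e.g. cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"   # the lt 1.15 2 call in the trace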
00:09:56.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.993 --rc genhtml_branch_coverage=1 00:09:56.993 --rc genhtml_function_coverage=1 00:09:56.993 --rc genhtml_legend=1 00:09:56.993 --rc geninfo_all_blocks=1 00:09:56.993 --rc geninfo_unexecuted_blocks=1 00:09:56.993 00:09:56.993 ' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.993 --rc genhtml_branch_coverage=1 00:09:56.993 --rc genhtml_function_coverage=1 00:09:56.993 --rc genhtml_legend=1 00:09:56.993 --rc geninfo_all_blocks=1 00:09:56.993 --rc geninfo_unexecuted_blocks=1 00:09:56.993 00:09:56.993 ' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.993 --rc genhtml_branch_coverage=1 00:09:56.993 --rc genhtml_function_coverage=1 00:09:56.993 --rc genhtml_legend=1 00:09:56.993 --rc geninfo_all_blocks=1 00:09:56.993 --rc geninfo_unexecuted_blocks=1 00:09:56.993 00:09:56.993 ' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.993 --rc genhtml_branch_coverage=1 00:09:56.993 --rc genhtml_function_coverage=1 00:09:56.993 --rc genhtml_legend=1 00:09:56.993 --rc geninfo_all_blocks=1 00:09:56.993 --rc geninfo_unexecuted_blocks=1 00:09:56.993 00:09:56.993 ' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.993 11:30:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:56.993 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.994 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:56.994 ************************************ 00:09:56.994 START TEST nvmf_example 00:09:56.994 ************************************ 00:09:56.994 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:57.252 * Looking for test storage... 
00:09:57.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.252 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.253 --rc genhtml_branch_coverage=1 00:09:57.253 --rc genhtml_function_coverage=1 00:09:57.253 --rc genhtml_legend=1 00:09:57.253 --rc geninfo_all_blocks=1 00:09:57.253 --rc geninfo_unexecuted_blocks=1 00:09:57.253 00:09:57.253 ' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.253 --rc genhtml_branch_coverage=1 00:09:57.253 --rc genhtml_function_coverage=1 00:09:57.253 --rc genhtml_legend=1 00:09:57.253 --rc geninfo_all_blocks=1 00:09:57.253 --rc geninfo_unexecuted_blocks=1 00:09:57.253 00:09:57.253 ' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.253 --rc genhtml_branch_coverage=1 00:09:57.253 --rc genhtml_function_coverage=1 00:09:57.253 --rc genhtml_legend=1 00:09:57.253 --rc geninfo_all_blocks=1 00:09:57.253 --rc geninfo_unexecuted_blocks=1 00:09:57.253 00:09:57.253 ' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.253 --rc genhtml_branch_coverage=1 00:09:57.253 --rc genhtml_function_coverage=1 00:09:57.253 --rc genhtml_legend=1 00:09:57.253 --rc geninfo_all_blocks=1 00:09:57.253 --rc geninfo_unexecuted_blocks=1 00:09:57.253 00:09:57.253 ' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:57.253 11:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.253 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:57.253 11:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.253 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.254 Cannot find device "nvmf_init_br" 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.254 Cannot find device "nvmf_init_br2" 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:57.254 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.512 Cannot find device "nvmf_tgt_br" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.512 Cannot find device "nvmf_tgt_br2" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.512 Cannot find device "nvmf_init_br" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.512 Cannot find device "nvmf_init_br2" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.512 Cannot find device "nvmf_tgt_br" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.512 Cannot find device "nvmf_tgt_br2" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.512 Cannot find device "nvmf_br" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.512 Cannot find 
device "nvmf_init_if" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.512 Cannot find device "nvmf_init_if2" 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.512 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.771 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.772 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.772 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.772 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:09:57.772 00:09:57.772 --- 10.0.0.3 ping statistics --- 00:09:57.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.772 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.772 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.772 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:57.772 00:09:57.772 --- 10.0.0.4 ping statistics --- 00:09:57.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.772 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:57.772 00:09:57.772 --- 10.0.0.1 ping statistics --- 00:09:57.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.772 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:09:57.772 00:09:57.772 --- 10.0.0.2 ping statistics --- 00:09:57.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.772 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72813 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72813 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 72813 ']' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.772 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.772 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.146 11:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:09:59.146 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:11.351 Initializing NVMe Controllers 00:10:11.351 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.351 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.351 Initialization complete. Launching workers. 00:10:11.351 ======================================================== 00:10:11.351 Latency(us) 00:10:11.351 Device Information : IOPS MiB/s Average min max 00:10:11.351 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14515.58 56.70 4409.47 774.86 21397.87 00:10:11.351 ======================================================== 00:10:11.351 Total : 14515.58 56.70 4409.47 774.86 21397.87 00:10:11.351 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.351 rmmod nvme_tcp 00:10:11.351 rmmod nvme_fabrics 00:10:11.351 rmmod nvme_keyring 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 72813 ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 72813 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 72813 ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 72813 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72813 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:11.351 11:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72813' 00:10:11.351 killing process with pid 72813 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 72813 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 72813 00:10:11.351 nvmf threads initialize successfully 00:10:11.351 bdev subsystem init successfully 00:10:11.351 created a nvmf target service 00:10:11.351 create targets's poll groups done 00:10:11.351 all subsystems of target started 00:10:11.351 nvmf target is running 00:10:11.351 all subsystems of target stopped 00:10:11.351 destroy targets's poll groups done 00:10:11.351 destroyed the nvmf target service 00:10:11.351 bdev subsystem finish successfully 00:10:11.351 nvmf threads destroy successfully 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.351 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.351 00:10:11.351 real 0m12.818s 00:10:11.351 user 0m44.998s 00:10:11.351 sys 0m2.117s 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.351 ************************************ 00:10:11.351 END TEST nvmf_example 00:10:11.351 ************************************ 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.351 11:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.351 ************************************ 00:10:11.352 START TEST nvmf_filesystem 00:10:11.352 ************************************ 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.352 * Looking for test storage... 
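For reference, the nvmf_example test that ended above drives the target entirely over JSON-RPC, so the traced sequence can be replayed by hand. The sketch below makes a few assumptions: a built SPDK tree (defaulting to the repo path seen in the trace), a target process already started and listening on the default RPC socket /var/tmp/spdk.sock, and 10.0.0.3 reachable from the current shell. In the CI run the target sits inside the nvmf_tgt_ns_spdk network namespace behind ip netns exec and the RPCs go through the harness's rpc_cmd wrapper; both are simplified away here, while every flag, address, and NQN value is taken from the trace itself.

#!/usr/bin/env bash
# Sketch: replay the nvmf_example RPC sequence from the trace above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default

# Target side, started beforehand (same binary and flags as the trace):
#   $SPDK_DIR/build/examples/nvmf -i 0 -g 10000 -m 0xF &

$rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags copied from the trace
$rpc bdev_malloc_create 64 512                    # 64 MB bdev, 512 B blocks; prints "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: the workload the latency table above came from
# (queue depth 64, 4 KiB I/Os, 30% reads / 70% writes, 10 seconds).
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

With that in mind, the filesystem test output continues below.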
00:10:11.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.352 --rc genhtml_branch_coverage=1 00:10:11.352 --rc genhtml_function_coverage=1 00:10:11.352 --rc genhtml_legend=1 00:10:11.352 --rc geninfo_all_blocks=1 00:10:11.352 --rc geninfo_unexecuted_blocks=1 00:10:11.352 00:10:11.352 ' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.352 --rc genhtml_branch_coverage=1 00:10:11.352 --rc genhtml_function_coverage=1 00:10:11.352 --rc genhtml_legend=1 00:10:11.352 --rc geninfo_all_blocks=1 00:10:11.352 --rc geninfo_unexecuted_blocks=1 00:10:11.352 00:10:11.352 ' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.352 --rc genhtml_branch_coverage=1 00:10:11.352 --rc genhtml_function_coverage=1 00:10:11.352 --rc genhtml_legend=1 00:10:11.352 --rc geninfo_all_blocks=1 00:10:11.352 --rc geninfo_unexecuted_blocks=1 00:10:11.352 00:10:11.352 ' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.352 --rc genhtml_branch_coverage=1 00:10:11.352 --rc genhtml_function_coverage=1 00:10:11.352 --rc genhtml_legend=1 00:10:11.352 --rc geninfo_all_blocks=1 00:10:11.352 --rc geninfo_unexecuted_blocks=1 00:10:11.352 00:10:11.352 ' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:11.352 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:11.352 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:11.353 #define SPDK_CONFIG_H 00:10:11.353 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:11.353 #define SPDK_CONFIG_APPS 1 00:10:11.353 #define SPDK_CONFIG_ARCH 
native 00:10:11.353 #undef SPDK_CONFIG_ASAN 00:10:11.353 #define SPDK_CONFIG_AVAHI 1 00:10:11.353 #undef SPDK_CONFIG_CET 00:10:11.353 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:11.353 #define SPDK_CONFIG_COVERAGE 1 00:10:11.353 #define SPDK_CONFIG_CROSS_PREFIX 00:10:11.353 #undef SPDK_CONFIG_CRYPTO 00:10:11.353 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:11.353 #undef SPDK_CONFIG_CUSTOMOCF 00:10:11.353 #undef SPDK_CONFIG_DAOS 00:10:11.353 #define SPDK_CONFIG_DAOS_DIR 00:10:11.353 #define SPDK_CONFIG_DEBUG 1 00:10:11.353 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:11.353 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:11.353 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:11.353 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:11.353 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:11.353 #undef SPDK_CONFIG_DPDK_UADK 00:10:11.353 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:11.353 #define SPDK_CONFIG_EXAMPLES 1 00:10:11.353 #undef SPDK_CONFIG_FC 00:10:11.353 #define SPDK_CONFIG_FC_PATH 00:10:11.353 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:11.353 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:11.353 #define SPDK_CONFIG_FSDEV 1 00:10:11.353 #undef SPDK_CONFIG_FUSE 00:10:11.353 #undef SPDK_CONFIG_FUZZER 00:10:11.353 #define SPDK_CONFIG_FUZZER_LIB 00:10:11.353 #define SPDK_CONFIG_GOLANG 1 00:10:11.353 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:11.353 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:11.353 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:11.353 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:11.353 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:11.353 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:11.353 #undef SPDK_CONFIG_HAVE_LZ4 00:10:11.353 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:11.353 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:11.353 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:11.353 #define SPDK_CONFIG_IDXD 1 00:10:11.353 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:11.353 #undef SPDK_CONFIG_IPSEC_MB 00:10:11.353 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:11.353 #define SPDK_CONFIG_ISAL 1 00:10:11.353 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:11.353 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:11.353 #define SPDK_CONFIG_LIBDIR 00:10:11.353 #undef SPDK_CONFIG_LTO 00:10:11.353 #define SPDK_CONFIG_MAX_LCORES 128 00:10:11.353 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:11.353 #define SPDK_CONFIG_NVME_CUSE 1 00:10:11.353 #undef SPDK_CONFIG_OCF 00:10:11.353 #define SPDK_CONFIG_OCF_PATH 00:10:11.353 #define SPDK_CONFIG_OPENSSL_PATH 00:10:11.353 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:11.353 #define SPDK_CONFIG_PGO_DIR 00:10:11.353 #undef SPDK_CONFIG_PGO_USE 00:10:11.353 #define SPDK_CONFIG_PREFIX /usr/local 00:10:11.353 #undef SPDK_CONFIG_RAID5F 00:10:11.353 #undef SPDK_CONFIG_RBD 00:10:11.353 #define SPDK_CONFIG_RDMA 1 00:10:11.353 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:11.353 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:11.353 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:11.353 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:11.353 #define SPDK_CONFIG_SHARED 1 00:10:11.353 #undef SPDK_CONFIG_SMA 00:10:11.353 #define SPDK_CONFIG_TESTS 1 00:10:11.353 #undef SPDK_CONFIG_TSAN 00:10:11.353 #define SPDK_CONFIG_UBLK 1 00:10:11.353 #define SPDK_CONFIG_UBSAN 1 00:10:11.353 #undef SPDK_CONFIG_UNIT_TESTS 00:10:11.353 #undef SPDK_CONFIG_URING 00:10:11.353 #define SPDK_CONFIG_URING_PATH 00:10:11.353 #undef SPDK_CONFIG_URING_ZNS 00:10:11.353 #define SPDK_CONFIG_USDT 1 00:10:11.353 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:11.353 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:11.353 
#undef SPDK_CONFIG_VFIO_USER 00:10:11.353 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:11.353 #define SPDK_CONFIG_VHOST 1 00:10:11.353 #define SPDK_CONFIG_VIRTIO 1 00:10:11.353 #undef SPDK_CONFIG_VTUNE 00:10:11.353 #define SPDK_CONFIG_VTUNE_DIR 00:10:11.353 #define SPDK_CONFIG_WERROR 1 00:10:11.353 #define SPDK_CONFIG_WPDK_DIR 00:10:11.353 #undef SPDK_CONFIG_XNVME 00:10:11.353 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.353 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:11.354 
11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:11.354 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:11.355 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.355 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:11.356 11:30:15 
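The exports in this stretch configure the sanitizer runtimes for the test run: ASAN aborts on error without core dumps, UBSAN halts and prints a stack trace, and a LeakSanitizer suppression file is regenerated on the fly to ignore a known leak in libfuse3. The heavily repeated segments in LD_LIBRARY_PATH and PYTHONPATH appear to be an artifact of autotest_common.sh being sourced once per nested test script, each sourcing prepending the same directories again; they are redundant but harmless. A condensed sketch of the sanitizer setup, with values taken verbatim from the log (the surrounding script is illustrative, not the actual autotest_common.sh source):

    # Rebuild the LSAN suppression file each run (path and entry from the log).
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"

    # Sanitizer knobs as exported above.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$asan_suppression_file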
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 73089 ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 73089 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.SCTeId 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.SCTeId/tests/target /tmp/spdk.SCTeId 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13975957504 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5593272320 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13975957504 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5593272320 00:10:11.356 
11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266286080 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:10:11.356 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=94933684224 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4769095680 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:11.357 * Looking for test storage... 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13975957504 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:11.357 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.357 --rc genhtml_branch_coverage=1 00:10:11.357 --rc genhtml_function_coverage=1 00:10:11.357 --rc genhtml_legend=1 00:10:11.357 --rc geninfo_all_blocks=1 00:10:11.357 --rc geninfo_unexecuted_blocks=1 00:10:11.357 00:10:11.357 ' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.357 --rc genhtml_branch_coverage=1 00:10:11.357 --rc genhtml_function_coverage=1 00:10:11.357 --rc genhtml_legend=1 00:10:11.357 --rc geninfo_all_blocks=1 00:10:11.357 --rc geninfo_unexecuted_blocks=1 00:10:11.357 00:10:11.357 ' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.357 --rc genhtml_branch_coverage=1 00:10:11.357 --rc genhtml_function_coverage=1 00:10:11.357 --rc genhtml_legend=1 00:10:11.357 --rc geninfo_all_blocks=1 00:10:11.357 --rc geninfo_unexecuted_blocks=1 00:10:11.357 00:10:11.357 ' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.357 --rc genhtml_branch_coverage=1 00:10:11.357 --rc genhtml_function_coverage=1 00:10:11.357 --rc genhtml_legend=1 00:10:11.357 --rc geninfo_all_blocks=1 00:10:11.357 --rc geninfo_unexecuted_blocks=1 00:10:11.357 00:10:11.357 ' 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.357 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.358 11:30:15 
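The one genuine shell error in this stretch — `/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` — comes from the `'[' '' -eq 1 ']'` test visible just before it: a configuration variable is unset, so `[` receives an empty string where `-eq` requires an integer. The test simply evaluates false and the run continues, so the message is noisy but harmless. A defensive pattern that avoids it (illustrative; SPDK_SOME_FLAG is a placeholder, since the log does not show which variable common.sh line 33 actually reads):

    # Failing pattern from the log: empty string where -eq needs an integer.
    SPDK_SOME_FLAG=""                        # placeholder name, not the real variable
    [ "$SPDK_SOME_FLAG" -eq 1 ] && echo on   # -> "[: : integer expression expected"

    # Defensive variants that stay silent when the variable is unset or empty:
    [ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && echo on    # default to 0
    [[ ${SPDK_SOME_FLAG:-0} -eq 1 ]] && echo on    # bash [[ ]] form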
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.358 Cannot find device "nvmf_init_br" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.358 Cannot find device "nvmf_init_br2" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.358 Cannot find device "nvmf_tgt_br" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.358 Cannot find device "nvmf_tgt_br2" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.358 Cannot find device "nvmf_init_br" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.358 Cannot find device "nvmf_init_br2" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.358 Cannot find device "nvmf_tgt_br" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.358 Cannot find device "nvmf_tgt_br2" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.358 Cannot find device "nvmf_br" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.358 Cannot find device "nvmf_init_if" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.358 Cannot find device "nvmf_init_if2" 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.358 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:11.358 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:11.359 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:11.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:11.359 00:10:11.359 --- 10.0.0.3 ping statistics --- 00:10:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.359 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:11.359 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:11.359 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:10:11.359 00:10:11.359 --- 10.0.0.4 ping statistics --- 00:10:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.359 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:11.359 00:10:11.359 --- 10.0.0.1 ping statistics --- 00:10:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.359 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:11.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:11.359 00:10:11.359 --- 10.0.0.2 ping statistics --- 00:10:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.359 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.359 ************************************ 00:10:11.359 START TEST nvmf_filesystem_no_in_capsule 00:10:11.359 ************************************ 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73289 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73289 00:10:11.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73289 ']' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.359 [2024-12-16 11:30:16.265740] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:10:11.359 [2024-12-16 11:30:16.265845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.359 [2024-12-16 11:30:16.421001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.359 [2024-12-16 11:30:16.486266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.359 [2024-12-16 11:30:16.486332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.359 [2024-12-16 11:30:16.486348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.359 [2024-12-16 11:30:16.486358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.359 [2024-12-16 11:30:16.486367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
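nvmfappstart launches nvmf_tgt inside the target namespace with all tracepoints enabled (-e 0xFFFF) on four cores (-m 0xF), then waitforlisten blocks until the process answers on the default RPC socket /var/tmp/spdk.sock. A minimal sketch of that start/poll handshake (the nvmf_tgt invocation is verbatim from the log; the rpc.py path and the polling loop are simplifications of waitforlisten, not its exact body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the target is ready (simplified waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died before listening" >&2; exit 1; }
        sleep 0.5
    done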
00:10:11.359 [2024-12-16 11:30:16.487705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.359 [2024-12-16 11:30:16.487863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.359 [2024-12-16 11:30:16.488031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.359 [2024-12-16 11:30:16.488040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.359 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 [2024-12-16 11:30:16.673089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.360 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.618 Malloc1 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.618 11:30:16 
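With the target up, the test provisions it over RPC: a TCP transport with in-capsule data disabled (-c 0, which is exactly what this nvmf_filesystem_no_in_capsule variant exercises), a 512 MiB malloc bdev with 512-byte blocks (hence the num_blocks of 1048576 in the JSON that follows), then the subsystem, namespace, and listener that appear on the next lines. The same sequence as standalone RPC calls (commands verbatim from the rpc_cmd invocations in the log; driving them through rpc.py at this path is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed RPC client path

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0           # -c 0: no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420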
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.618 [2024-12-16 11:30:16.862861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:11.618 { 00:10:11.618 "aliases": [ 00:10:11.618 "bc4f334d-5942-42b8-a3b4-b7527e035ae4" 00:10:11.618 ], 00:10:11.618 "assigned_rate_limits": { 00:10:11.618 "r_mbytes_per_sec": 0, 00:10:11.618 "rw_ios_per_sec": 0, 00:10:11.618 "rw_mbytes_per_sec": 0, 00:10:11.618 "w_mbytes_per_sec": 0 00:10:11.618 }, 00:10:11.618 "block_size": 512, 00:10:11.618 "claim_type": "exclusive_write", 00:10:11.618 "claimed": true, 00:10:11.618 "driver_specific": {}, 00:10:11.618 "memory_domains": [ 00:10:11.618 { 00:10:11.618 "dma_device_id": "system", 00:10:11.618 "dma_device_type": 1 00:10:11.618 }, 00:10:11.618 { 00:10:11.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.618 
"dma_device_type": 2 00:10:11.618 } 00:10:11.618 ], 00:10:11.618 "name": "Malloc1", 00:10:11.618 "num_blocks": 1048576, 00:10:11.618 "product_name": "Malloc disk", 00:10:11.618 "supported_io_types": { 00:10:11.618 "abort": true, 00:10:11.618 "compare": false, 00:10:11.618 "compare_and_write": false, 00:10:11.618 "copy": true, 00:10:11.618 "flush": true, 00:10:11.618 "get_zone_info": false, 00:10:11.618 "nvme_admin": false, 00:10:11.618 "nvme_io": false, 00:10:11.618 "nvme_io_md": false, 00:10:11.618 "nvme_iov_md": false, 00:10:11.618 "read": true, 00:10:11.618 "reset": true, 00:10:11.618 "seek_data": false, 00:10:11.618 "seek_hole": false, 00:10:11.618 "unmap": true, 00:10:11.618 "write": true, 00:10:11.618 "write_zeroes": true, 00:10:11.618 "zcopy": true, 00:10:11.618 "zone_append": false, 00:10:11.618 "zone_management": false 00:10:11.618 }, 00:10:11.618 "uuid": "bc4f334d-5942-42b8-a3b4-b7527e035ae4", 00:10:11.618 "zoned": false 00:10:11.618 } 00:10:11.618 ]' 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:11.618 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:11.618 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:11.618 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:11.618 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:11.618 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:11.618 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:11.877 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.877 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.877 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.877 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.877 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:14.409 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:14.410 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:14.410 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:14.410 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.344 ************************************ 00:10:15.344 START TEST filesystem_ext4 00:10:15.344 ************************************ 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
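Taken together, the rpc and nvme-cli calls traced above export a 512 MiB malloc bdev over TCP and attach to it from the host before the per-filesystem tests start. A condensed sketch of that sequence, with all arguments as they appear in the trace (rpc.py stands in for the rpc_cmd wrapper the scripts use, and the hostnqn/hostid UUID is this VM's own identity):

  # target side: back a namespace with a 512 MiB, 512 B-block malloc disk
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host side: connect, then carve one GPT partition for the fs tests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
      --hostid=503bc55d-5584-47ba-b171-ff181f376052
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe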
00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:15.344 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:15.344 mke2fs 1.47.0 (5-Feb-2023) 00:10:15.344 Discarding device blocks: 0/522240 done 00:10:15.344 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:15.344 Filesystem UUID: 6b6bc3bb-9a3c-4ed7-9c31-5e17e138b081 00:10:15.344 Superblock backups stored on blocks: 00:10:15.344 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:15.344 00:10:15.344 Allocating group tables: 0/64 done 00:10:15.344 Writing inode tables: 0/64 done 00:10:15.344 Creating journal (8192 blocks): done 00:10:15.344 Writing superblocks and filesystem accounting information: 0/64 done 00:10:15.344 00:10:15.345 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:15.345 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:20.613 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:20.613 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:20.872 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:20.872 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:20.872 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:20.872 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:20.872 
11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 73289 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:20.873 ************************************ 00:10:20.873 END TEST filesystem_ext4 00:10:20.873 ************************************ 00:10:20.873 00:10:20.873 real 0m5.693s 00:10:20.873 user 0m0.027s 00:10:20.873 sys 0m0.065s 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.873 ************************************ 00:10:20.873 START TEST filesystem_btrfs 00:10:20.873 ************************************ 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:20.873 11:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:20.873 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:21.132 btrfs-progs v6.8.1 00:10:21.132 See https://btrfs.readthedocs.io for more information. 00:10:21.132 00:10:21.132 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:21.132 NOTE: several default settings have changed in version 5.15, please make sure 00:10:21.132 this does not affect your deployments: 00:10:21.132 - DUP for metadata (-m dup) 00:10:21.132 - enabled no-holes (-O no-holes) 00:10:21.132 - enabled free-space-tree (-R free-space-tree) 00:10:21.132 00:10:21.132 Label: (null) 00:10:21.132 UUID: 3ef93584-840d-452f-943f-5f13018b8f89 00:10:21.132 Node size: 16384 00:10:21.132 Sector size: 4096 (CPU page size: 4096) 00:10:21.132 Filesystem size: 510.00MiB 00:10:21.132 Block group profiles: 00:10:21.132 Data: single 8.00MiB 00:10:21.132 Metadata: DUP 32.00MiB 00:10:21.132 System: DUP 8.00MiB 00:10:21.132 SSD detected: yes 00:10:21.132 Zoned device: no 00:10:21.132 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:21.132 Checksum: crc32c 00:10:21.132 Number of devices: 1 00:10:21.132 Devices: 00:10:21.132 ID SIZE PATH 00:10:21.132 1 510.00MiB /dev/nvme0n1p1 00:10:21.132 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 73289 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.132 
11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.132 ************************************ 00:10:21.132 END TEST filesystem_btrfs 00:10:21.132 ************************************ 00:10:21.132 00:10:21.132 real 0m0.228s 00:10:21.132 user 0m0.023s 00:10:21.132 sys 0m0.061s 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.132 ************************************ 00:10:21.132 START TEST filesystem_xfs 00:10:21.132 ************************************ 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:21.132 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:21.132 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:21.132 = sectsz=512 attr=2, projid32bit=1 00:10:21.132 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:21.132 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:21.132 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:21.132 = sunit=0 swidth=0 blks 00:10:21.132 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:21.132 log =internal log bsize=4096 blocks=16384, version=2 00:10:21.132 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:21.132 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:22.068 Discarding blocks...Done. 00:10:22.068 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:22.068 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 73289 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.600 ************************************ 00:10:24.600 END TEST filesystem_xfs 00:10:24.600 ************************************ 00:10:24.600 00:10:24.600 real 0m3.204s 00:10:24.600 user 0m0.020s 00:10:24.600 sys 0m0.055s 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.600 11:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 73289 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73289 ']' 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73289 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73289 00:10:24.600 killing process with pid 73289 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73289' 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 73289 00:10:24.600 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 73289 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:24.859 00:10:24.859 real 0m14.012s 00:10:24.859 user 0m53.588s 00:10:24.859 sys 0m1.804s 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.859 ************************************ 00:10:24.859 END TEST nvmf_filesystem_no_in_capsule 00:10:24.859 ************************************ 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 ************************************ 00:10:24.859 START TEST nvmf_filesystem_in_capsule 00:10:24.859 ************************************ 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73647 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73647 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73647 ']' 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
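The second half of the suite repeats the same ext4/btrfs/xfs checks with one transport-level difference: the run below creates the TCP transport with a 4096-byte in-capsule data size, so host writes up to that size travel inline in the command capsule instead of being fetched in a separate data transfer. The only call that differs from the first pass (sketch, same rpc.py shorthand as above):

  # in-capsule variant: allow up to 4 KiB of data inline in the capsule
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096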
00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.859 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 [2024-12-16 11:30:30.322063] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:10:25.118 [2024-12-16 11:30:30.322162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.118 [2024-12-16 11:30:30.468137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.118 [2024-12-16 11:30:30.526215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.118 [2024-12-16 11:30:30.526516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.118 [2024-12-16 11:30:30.526707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.118 [2024-12-16 11:30:30.526768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.118 [2024-12-16 11:30:30.526863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.118 [2024-12-16 11:30:30.528068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.118 [2024-12-16 11:30:30.528197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.118 [2024-12-16 11:30:30.528328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.118 [2024-12-16 11:30:30.528333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.376 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.377 [2024-12-16 11:30:30.701185] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.377 11:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.377 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.635 Malloc1 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.635 [2024-12-16 11:30:30.870639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:25.635 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:25.636 11:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:25.636 { 00:10:25.636 "aliases": [ 00:10:25.636 "7a69195b-eb85-4e02-bc27-dc79dafe0d60" 00:10:25.636 ], 00:10:25.636 "assigned_rate_limits": { 00:10:25.636 "r_mbytes_per_sec": 0, 00:10:25.636 "rw_ios_per_sec": 0, 00:10:25.636 "rw_mbytes_per_sec": 0, 00:10:25.636 "w_mbytes_per_sec": 0 00:10:25.636 }, 00:10:25.636 "block_size": 512, 00:10:25.636 "claim_type": "exclusive_write", 00:10:25.636 "claimed": true, 00:10:25.636 "driver_specific": {}, 00:10:25.636 "memory_domains": [ 00:10:25.636 { 00:10:25.636 "dma_device_id": "system", 00:10:25.636 "dma_device_type": 1 00:10:25.636 }, 00:10:25.636 { 00:10:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.636 "dma_device_type": 2 00:10:25.636 } 00:10:25.636 ], 00:10:25.636 "name": "Malloc1", 00:10:25.636 "num_blocks": 1048576, 00:10:25.636 "product_name": "Malloc disk", 00:10:25.636 "supported_io_types": { 00:10:25.636 "abort": true, 00:10:25.636 "compare": false, 00:10:25.636 "compare_and_write": false, 00:10:25.636 "copy": true, 00:10:25.636 "flush": true, 00:10:25.636 "get_zone_info": false, 00:10:25.636 "nvme_admin": false, 00:10:25.636 "nvme_io": false, 00:10:25.636 "nvme_io_md": false, 00:10:25.636 "nvme_iov_md": false, 00:10:25.636 "read": true, 00:10:25.636 "reset": true, 00:10:25.636 "seek_data": false, 00:10:25.636 "seek_hole": false, 00:10:25.636 "unmap": true, 00:10:25.636 "write": true, 00:10:25.636 "write_zeroes": true, 00:10:25.636 "zcopy": true, 00:10:25.636 "zone_append": false, 00:10:25.636 "zone_management": false 00:10:25.636 }, 00:10:25.636 "uuid": "7a69195b-eb85-4e02-bc27-dc79dafe0d60", 00:10:25.636 "zoned": false 00:10:25.636 } 00:10:25.636 ]' 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:25.636 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:25.895 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.895 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:25.895 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.895 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:25.895 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:27.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:28.061 11:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:28.061 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.017 ************************************ 00:10:29.017 START TEST filesystem_in_capsule_ext4 00:10:29.017 ************************************ 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:29.017 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:29.017 mke2fs 1.47.0 (5-Feb-2023) 00:10:29.274 Discarding device blocks: 0/522240 done 00:10:29.274 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:29.274 Filesystem UUID: 9b1680e8-8e88-4d8e-a93e-02e5a7be0c83 00:10:29.274 Superblock backups stored on blocks: 00:10:29.274 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:29.274 00:10:29.274 Allocating group tables: 0/64 done 00:10:29.274 Writing inode tables: 
0/64 done 00:10:29.274 Creating journal (8192 blocks): done 00:10:29.274 Writing superblocks and filesystem accounting information: 0/64 done 00:10:29.274 00:10:29.274 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:29.274 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73647 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.539 ************************************ 00:10:34.539 END TEST filesystem_in_capsule_ext4 00:10:34.539 ************************************ 00:10:34.539 00:10:34.539 real 0m5.544s 00:10:34.539 user 0m0.027s 00:10:34.539 sys 0m0.056s 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.539 
************************************ 00:10:34.539 START TEST filesystem_in_capsule_btrfs 00:10:34.539 ************************************ 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:34.539 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.540 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.540 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:34.540 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:34.540 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.540 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:34.798 btrfs-progs v6.8.1 00:10:34.798 See https://btrfs.readthedocs.io for more information. 00:10:34.798 00:10:34.798 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:34.798 NOTE: several default settings have changed in version 5.15, please make sure 00:10:34.798 this does not affect your deployments: 00:10:34.798 - DUP for metadata (-m dup) 00:10:34.798 - enabled no-holes (-O no-holes) 00:10:34.798 - enabled free-space-tree (-R free-space-tree) 00:10:34.798 00:10:34.798 Label: (null) 00:10:34.798 UUID: 5037e7a7-69e8-4954-95aa-229b96ffbf52 00:10:34.798 Node size: 16384 00:10:34.798 Sector size: 4096 (CPU page size: 4096) 00:10:34.798 Filesystem size: 510.00MiB 00:10:34.798 Block group profiles: 00:10:34.798 Data: single 8.00MiB 00:10:34.798 Metadata: DUP 32.00MiB 00:10:34.798 System: DUP 8.00MiB 00:10:34.798 SSD detected: yes 00:10:34.798 Zoned device: no 00:10:34.798 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:34.798 Checksum: crc32c 00:10:34.798 Number of devices: 1 00:10:34.798 Devices: 00:10:34.798 ID SIZE PATH 00:10:34.798 1 510.00MiB /dev/nvme0n1p1 00:10:34.798 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73647 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.798 ************************************ 00:10:34.798 END TEST filesystem_in_capsule_btrfs 00:10:34.798 ************************************ 00:10:34.798 00:10:34.798 real 0m0.221s 00:10:34.798 user 0m0.025s 00:10:34.798 sys 0m0.054s 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.798 ************************************ 00:10:34.798 START TEST filesystem_in_capsule_xfs 00:10:34.798 ************************************ 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.798 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:35.057 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:35.057 = sectsz=512 attr=2, projid32bit=1 00:10:35.057 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:35.057 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:35.057 data = bsize=4096 blocks=130560, imaxpct=25 00:10:35.057 = sunit=0 swidth=0 blks 00:10:35.057 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:35.057 log =internal log bsize=4096 blocks=16384, version=2 00:10:35.057 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:35.057 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:35.624 Discarding blocks...Done. 
00:10:35.624 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:35.624 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73647 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.554 ************************************ 00:10:37.554 END TEST filesystem_in_capsule_xfs 00:10:37.554 ************************************ 00:10:37.554 00:10:37.554 real 0m2.633s 00:10:37.554 user 0m0.021s 00:10:37.554 sys 0m0.058s 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.554 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.813 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73647 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73647 ']' 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73647 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73647 00:10:37.813 killing process with pid 73647 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73647' 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 73647 00:10:37.813 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 73647 00:10:38.072 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:38.072 00:10:38.072 real 0m13.183s 00:10:38.072 user 0m50.203s 00:10:38.072 sys 0m1.965s 00:10:38.072 11:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.072 ************************************ 00:10:38.072 END TEST nvmf_filesystem_in_capsule 00:10:38.072 ************************************ 00:10:38.072 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.072 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:38.072 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.072 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.331 rmmod nvme_tcp 00:10:38.331 rmmod nvme_fabrics 00:10:38.331 rmmod nvme_keyring 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:38.331 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:10:38.589 00:10:38.589 real 0m28.607s 00:10:38.589 user 1m44.283s 00:10:38.589 sys 0m4.322s 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.589 ************************************ 00:10:38.589 END TEST nvmf_filesystem 00:10:38.589 ************************************ 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.589 ************************************ 00:10:38.589 START TEST nvmf_target_discovery 00:10:38.589 ************************************ 00:10:38.589 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:38.849 * Looking for test storage... 
00:10:38.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.849 --rc genhtml_branch_coverage=1 00:10:38.849 --rc genhtml_function_coverage=1 00:10:38.849 --rc genhtml_legend=1 00:10:38.849 --rc geninfo_all_blocks=1 00:10:38.849 --rc geninfo_unexecuted_blocks=1 00:10:38.849 00:10:38.849 ' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.849 --rc genhtml_branch_coverage=1 00:10:38.849 --rc genhtml_function_coverage=1 00:10:38.849 --rc genhtml_legend=1 00:10:38.849 --rc geninfo_all_blocks=1 00:10:38.849 --rc geninfo_unexecuted_blocks=1 00:10:38.849 00:10:38.849 ' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.849 --rc genhtml_branch_coverage=1 00:10:38.849 --rc genhtml_function_coverage=1 00:10:38.849 --rc genhtml_legend=1 00:10:38.849 --rc geninfo_all_blocks=1 00:10:38.849 --rc geninfo_unexecuted_blocks=1 00:10:38.849 00:10:38.849 ' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.849 --rc genhtml_branch_coverage=1 00:10:38.849 --rc genhtml_function_coverage=1 00:10:38.849 --rc genhtml_legend=1 00:10:38.849 --rc geninfo_all_blocks=1 00:10:38.849 --rc geninfo_unexecuted_blocks=1 00:10:38.849 00:10:38.849 ' 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.849 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:38.850 Cannot find device "nvmf_init_br" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:38.850 Cannot find device "nvmf_init_br2" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:38.850 Cannot find device "nvmf_tgt_br" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.850 Cannot find device "nvmf_tgt_br2" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:38.850 Cannot find device "nvmf_init_br" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:38.850 Cannot find device "nvmf_init_br2" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:38.850 Cannot find device "nvmf_tgt_br" 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:10:38.850 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:39.109 Cannot find device "nvmf_tgt_br2" 00:10:39.109 11:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:39.109 Cannot find device "nvmf_br" 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:39.109 Cannot find device "nvmf_init_if" 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:39.109 Cannot find device "nvmf_init_if2" 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:39.109 11:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:39.109 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:39.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:39.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:10:39.368 00:10:39.368 --- 10.0.0.3 ping statistics --- 00:10:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.368 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:39.368 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:39.368 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:10:39.368 00:10:39.368 --- 10.0.0.4 ping statistics --- 00:10:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.368 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:39.368 00:10:39.368 --- 10.0.0.1 ping statistics --- 00:10:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.368 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:39.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:39.368 00:10:39.368 --- 10.0.0.2 ping statistics --- 00:10:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.368 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=74216 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 74216 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 74216 ']' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.368 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 [2024-12-16 11:30:44.679797] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:10:39.368 [2024-12-16 11:30:44.679899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.627 [2024-12-16 11:30:44.836882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.627 [2024-12-16 11:30:44.906162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.627 [2024-12-16 11:30:44.906233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.627 [2024-12-16 11:30:44.906247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.627 [2024-12-16 11:30:44.906258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.627 [2024-12-16 11:30:44.906267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.627 [2024-12-16 11:30:44.907485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.627 [2024-12-16 11:30:44.907550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.627 [2024-12-16 11:30:44.908217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.627 [2024-12-16 11:30:44.908230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 [2024-12-16 11:30:45.095845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 Null1 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 [2024-12-16 11:30:45.144401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 Null2 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:39.904 Null3 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.904 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 Null4 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.905 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 4420 00:10:40.163 00:10:40.163 Discovery Log Number of Records 6, Generation counter 6 00:10:40.163 =====Discovery Log Entry 0====== 00:10:40.163 trtype: tcp 00:10:40.163 adrfam: ipv4 00:10:40.163 subtype: current discovery subsystem 00:10:40.163 treq: not required 00:10:40.163 portid: 0 00:10:40.163 trsvcid: 4420 00:10:40.163 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.163 traddr: 10.0.0.3 00:10:40.163 eflags: explicit discovery connections, duplicate discovery information 00:10:40.163 sectype: none 00:10:40.163 =====Discovery Log Entry 1====== 00:10:40.163 trtype: tcp 00:10:40.163 adrfam: ipv4 00:10:40.163 subtype: nvme subsystem 00:10:40.163 treq: not required 00:10:40.163 portid: 0 00:10:40.163 trsvcid: 4420 00:10:40.163 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.163 traddr: 10.0.0.3 00:10:40.163 eflags: none 00:10:40.163 sectype: none 00:10:40.163 =====Discovery Log Entry 2====== 00:10:40.163 trtype: tcp 00:10:40.163 adrfam: ipv4 00:10:40.163 subtype: nvme subsystem 00:10:40.163 treq: not required 00:10:40.163 portid: 0 00:10:40.163 trsvcid: 4420 00:10:40.163 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:40.164 traddr: 10.0.0.3 00:10:40.164 eflags: none 00:10:40.164 sectype: none 00:10:40.164 =====Discovery Log Entry 3====== 00:10:40.164 trtype: tcp 00:10:40.164 adrfam: ipv4 00:10:40.164 subtype: nvme subsystem 00:10:40.164 treq: not required 00:10:40.164 portid: 0 00:10:40.164 trsvcid: 4420 00:10:40.164 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:40.164 traddr: 10.0.0.3 00:10:40.164 eflags: none 00:10:40.164 sectype: none 00:10:40.164 =====Discovery Log Entry 4====== 00:10:40.164 trtype: tcp 00:10:40.164 adrfam: ipv4 00:10:40.164 subtype: nvme subsystem 
00:10:40.164 treq: not required 00:10:40.164 portid: 0 00:10:40.164 trsvcid: 4420 00:10:40.164 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:40.164 traddr: 10.0.0.3 00:10:40.164 eflags: none 00:10:40.164 sectype: none 00:10:40.164 =====Discovery Log Entry 5====== 00:10:40.164 trtype: tcp 00:10:40.164 adrfam: ipv4 00:10:40.164 subtype: discovery subsystem referral 00:10:40.164 treq: not required 00:10:40.164 portid: 0 00:10:40.164 trsvcid: 4430 00:10:40.164 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.164 traddr: 10.0.0.3 00:10:40.164 eflags: none 00:10:40.164 sectype: none 00:10:40.164 Perform nvmf subsystem discovery via RPC 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 [ 00:10:40.164 { 00:10:40.164 "allow_any_host": true, 00:10:40.164 "hosts": [], 00:10:40.164 "listen_addresses": [ 00:10:40.164 { 00:10:40.164 "adrfam": "IPv4", 00:10:40.164 "traddr": "10.0.0.3", 00:10:40.164 "trsvcid": "4420", 00:10:40.164 "trtype": "TCP" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:40.164 "subtype": "Discovery" 00:10:40.164 }, 00:10:40.164 { 00:10:40.164 "allow_any_host": true, 00:10:40.164 "hosts": [], 00:10:40.164 "listen_addresses": [ 00:10:40.164 { 00:10:40.164 "adrfam": "IPv4", 00:10:40.164 "traddr": "10.0.0.3", 00:10:40.164 "trsvcid": "4420", 00:10:40.164 "trtype": "TCP" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "max_cntlid": 65519, 00:10:40.164 "max_namespaces": 32, 00:10:40.164 "min_cntlid": 1, 00:10:40.164 "model_number": "SPDK bdev Controller", 00:10:40.164 "namespaces": [ 00:10:40.164 { 00:10:40.164 "bdev_name": "Null1", 00:10:40.164 "name": "Null1", 00:10:40.164 "nguid": "6900617936004FD18DD2CF37FE4EF7EB", 00:10:40.164 "nsid": 1, 00:10:40.164 "uuid": "69006179-3600-4fd1-8dd2-cf37fe4ef7eb" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.164 "serial_number": "SPDK00000000000001", 00:10:40.164 "subtype": "NVMe" 00:10:40.164 }, 00:10:40.164 { 00:10:40.164 "allow_any_host": true, 00:10:40.164 "hosts": [], 00:10:40.164 "listen_addresses": [ 00:10:40.164 { 00:10:40.164 "adrfam": "IPv4", 00:10:40.164 "traddr": "10.0.0.3", 00:10:40.164 "trsvcid": "4420", 00:10:40.164 "trtype": "TCP" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "max_cntlid": 65519, 00:10:40.164 "max_namespaces": 32, 00:10:40.164 "min_cntlid": 1, 00:10:40.164 "model_number": "SPDK bdev Controller", 00:10:40.164 "namespaces": [ 00:10:40.164 { 00:10:40.164 "bdev_name": "Null2", 00:10:40.164 "name": "Null2", 00:10:40.164 "nguid": "62A751CE229047F5865F7EAD4B8B8607", 00:10:40.164 "nsid": 1, 00:10:40.164 "uuid": "62a751ce-2290-47f5-865f-7ead4b8b8607" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.164 "serial_number": "SPDK00000000000002", 00:10:40.164 "subtype": "NVMe" 00:10:40.164 }, 00:10:40.164 { 00:10:40.164 "allow_any_host": true, 00:10:40.164 "hosts": [], 00:10:40.164 "listen_addresses": [ 00:10:40.164 { 00:10:40.164 "adrfam": "IPv4", 00:10:40.164 "traddr": "10.0.0.3", 00:10:40.164 "trsvcid": "4420", 00:10:40.164 
"trtype": "TCP" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "max_cntlid": 65519, 00:10:40.164 "max_namespaces": 32, 00:10:40.164 "min_cntlid": 1, 00:10:40.164 "model_number": "SPDK bdev Controller", 00:10:40.164 "namespaces": [ 00:10:40.164 { 00:10:40.164 "bdev_name": "Null3", 00:10:40.164 "name": "Null3", 00:10:40.164 "nguid": "A981F7A5E17B44AF9FACABC56654CF74", 00:10:40.164 "nsid": 1, 00:10:40.164 "uuid": "a981f7a5-e17b-44af-9fac-abc56654cf74" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:40.164 "serial_number": "SPDK00000000000003", 00:10:40.164 "subtype": "NVMe" 00:10:40.164 }, 00:10:40.164 { 00:10:40.164 "allow_any_host": true, 00:10:40.164 "hosts": [], 00:10:40.164 "listen_addresses": [ 00:10:40.164 { 00:10:40.164 "adrfam": "IPv4", 00:10:40.164 "traddr": "10.0.0.3", 00:10:40.164 "trsvcid": "4420", 00:10:40.164 "trtype": "TCP" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "max_cntlid": 65519, 00:10:40.164 "max_namespaces": 32, 00:10:40.164 "min_cntlid": 1, 00:10:40.164 "model_number": "SPDK bdev Controller", 00:10:40.164 "namespaces": [ 00:10:40.164 { 00:10:40.164 "bdev_name": "Null4", 00:10:40.164 "name": "Null4", 00:10:40.164 "nguid": "7140C46A046E4AD19AE9EC7046E1F06B", 00:10:40.164 "nsid": 1, 00:10:40.164 "uuid": "7140c46a-046e-4ad1-9ae9-ec7046e1f06b" 00:10:40.164 } 00:10:40.164 ], 00:10:40.164 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:40.164 "serial_number": "SPDK00000000000004", 00:10:40.164 "subtype": "NVMe" 00:10:40.164 } 00:10:40.164 ] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.164 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:40.165 11:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.165 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.165 rmmod nvme_tcp 00:10:40.423 rmmod nvme_fabrics 00:10:40.423 rmmod nvme_keyring 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 74216 ']' 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 74216 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 74216 ']' 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 74216 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74216 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.423 killing process with pid 74216 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74216' 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 74216 00:10:40.423 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 74216 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.682 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.682 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.958 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:10:40.958 00:10:40.958 real 0m2.182s 00:10:40.958 user 0m4.079s 00:10:40.958 sys 0m0.723s 00:10:40.958 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:10:40.958 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.958 ************************************ 00:10:40.958 END TEST nvmf_target_discovery 00:10:40.958 ************************************ 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.959 ************************************ 00:10:40.959 START TEST nvmf_referrals 00:10:40.959 ************************************ 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.959 * Looking for test storage... 00:10:40.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.959 --rc genhtml_branch_coverage=1 00:10:40.959 --rc genhtml_function_coverage=1 00:10:40.959 --rc genhtml_legend=1 00:10:40.959 --rc geninfo_all_blocks=1 00:10:40.959 --rc geninfo_unexecuted_blocks=1 00:10:40.959 00:10:40.959 ' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.959 --rc genhtml_branch_coverage=1 00:10:40.959 --rc genhtml_function_coverage=1 00:10:40.959 --rc genhtml_legend=1 00:10:40.959 --rc geninfo_all_blocks=1 00:10:40.959 --rc geninfo_unexecuted_blocks=1 00:10:40.959 00:10:40.959 ' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.959 --rc genhtml_branch_coverage=1 00:10:40.959 --rc genhtml_function_coverage=1 00:10:40.959 --rc genhtml_legend=1 00:10:40.959 --rc geninfo_all_blocks=1 00:10:40.959 --rc geninfo_unexecuted_blocks=1 00:10:40.959 00:10:40.959 ' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.959 --rc genhtml_branch_coverage=1 00:10:40.959 --rc genhtml_function_coverage=1 00:10:40.959 --rc genhtml_legend=1 00:10:40.959 --rc geninfo_all_blocks=1 00:10:40.959 --rc geninfo_unexecuted_blocks=1 00:10:40.959 00:10:40.959 ' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
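The cmp_versions/lt trace above compares dotted version strings (here "1.15" against "2") component by component, left to right, treating a missing component as 0. A minimal bash sketch of that idea, assuming nothing beyond plain bash; this is an illustration, not the SPDK scripts/common.sh helper itself:

    # version_lt A B: succeed (return 0) iff version A sorts strictly below B
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing component counts as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # the branch the trace above takes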
00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.959 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:41.241 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.241 Cannot find device "nvmf_init_br" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.241 Cannot find device "nvmf_init_br2" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.241 Cannot find device "nvmf_tgt_br" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.241 Cannot find device "nvmf_tgt_br2" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.241 Cannot find device "nvmf_init_br" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.241 Cannot find device "nvmf_init_br2" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.241 Cannot find device "nvmf_tgt_br" 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:10:41.241 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.241 Cannot find device "nvmf_tgt_br2" 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.242 Cannot find device "nvmf_br" 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.242 Cannot find device "nvmf_init_if" 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.242 Cannot find device "nvmf_init_if2" 00:10:41.242 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.242 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:10:41.501 00:10:41.501 --- 10.0.0.3 ping statistics --- 00:10:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.501 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.501 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:41.501 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:10:41.501 00:10:41.501 --- 10.0.0.4 ping statistics --- 00:10:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.501 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:41.501 00:10:41.501 --- 10.0.0.1 ping statistics --- 00:10:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.501 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:41.501 00:10:41.501 --- 10.0.0.2 ping statistics --- 00:10:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.501 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=74481 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 74481 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 74481 ']' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.501 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.501 [2024-12-16 11:30:46.878995] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
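The nvmf_veth_init trace above, verified by the four pings, builds the test network the target listens on: a network namespace for the SPDK target, veth pairs whose peer ends are enslaved to a bridge, 10.0.0.x/24 addressing, and iptables rules admitting NVMe/TCP traffic. A condensed bash sketch of that topology, reusing the interface and namespace names from the log (the real common.sh also creates the second *_if2/*_br2 pair and second addresses); run as root, illustrative only:

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
    ip link add nvmf_br type bridge                            # bridge joins the two sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.3                                         # initiator -> target check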
00:10:41.501 [2024-12-16 11:30:46.879128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.760 [2024-12-16 11:30:47.037431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.760 [2024-12-16 11:30:47.103519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.760 [2024-12-16 11:30:47.103590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.760 [2024-12-16 11:30:47.103605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.760 [2024-12-16 11:30:47.103615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.760 [2024-12-16 11:30:47.103624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.760 [2024-12-16 11:30:47.104877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.760 [2024-12-16 11:30:47.105077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.760 [2024-12-16 11:30:47.104956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.760 [2024-12-16 11:30:47.105076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 [2024-12-16 11:30:47.290124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 [2024-12-16 11:30:47.302320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:42.019 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:42.278 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
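get_referral_ips nvme and get_discovery_entries both work the same way: ask the kernel initiator for the target's discovery log page as JSON, then filter the records with jq. A sketch of that pipeline, reusing this run's generated host NQN/ID and the jq filter traced above:

    # Fetch the discovery log page from the target as JSON, then keep only the
    # referral entries (everything except the current discovery subsystem itself).
    nvme discover \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
        --hostid=503bc55d-5584-47ba-b171-ff181f376052 \
        -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort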
00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:42.537 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:42.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:42.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:42.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:42.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:42.796 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
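The @65/@66 checks assert that the referral table reported over RPC and the log page seen on the wire agree. A condensed sketch of that two-sided comparison (rpc_cmd is the harness wrapper around scripts/rpc.py; NVME_HOST is the --hostnqn/--hostid pair that nvmf/common.sh assembles):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # View 1: what the target believes it is advertising.
    rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

    # View 2: what an initiator actually receives in the discovery log page.
    wire_ips=$(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort)

    [[ $rpc_ips == "$wire_ips" ]]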
00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:43.055 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:43.314 11:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:43.314 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
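Removal is symmetric: a referral is keyed by transport, address, port, and subsystem NQN, and once the last one is gone both views must come back empty, which is what the (( 0 == 0 )) and [[ '' == '' ]] checks below verify. A sketch with this test's values:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2014-08.org.nvmexpress.discovery

    # The RPC-side referral table must be empty ...
    (( $($rpc nvmf_discovery_get_referrals | jq length) == 0 ))

    # ... and the on-wire log page must carry no referral records either.
    [[ -z $(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr') ]]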
00:10:43.572 rmmod nvme_tcp 00:10:43.572 rmmod nvme_fabrics 00:10:43.572 rmmod nvme_keyring 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 74481 ']' 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 74481 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 74481 ']' 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 74481 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.572 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74481 00:10:43.831 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.831 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.831 killing process with pid 74481 00:10:43.831 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74481' 00:10:43.831 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 74481 00:10:43.831 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 74481 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:43.831 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.090 11:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:10:44.090 00:10:44.090 real 0m3.298s 00:10:44.090 user 0m9.363s 00:10:44.090 sys 0m0.961s 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.090 ************************************ 00:10:44.090 END TEST nvmf_referrals 00:10:44.090 ************************************ 00:10:44.090 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.350 ************************************ 00:10:44.350 START TEST nvmf_connect_disconnect 00:10:44.350 ************************************ 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:44.350 * Looking for test storage... 
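nvmftestfini, traced above, is the mirror image of the setup: unload the fabrics modules, kill the target by pid, strip only the SPDK-tagged firewall rules, and dismantle the veth/bridge topology. Condensed from the trace (remove_spdk_ns itself runs with tracing suppressed, so deleting the namespace is the assumed final step):

    # Unload the initiator-side modules and reap the target process.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"

    # Keep every firewall rule except the ones tagged SPDK_NVMF at setup time.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach and delete the bridge, then both halves of each veth pair.
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk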
00:10:44.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.350 --rc genhtml_branch_coverage=1 00:10:44.350 --rc genhtml_function_coverage=1 00:10:44.350 --rc genhtml_legend=1 00:10:44.350 --rc geninfo_all_blocks=1 00:10:44.350 --rc geninfo_unexecuted_blocks=1 00:10:44.350 00:10:44.350 ' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.350 --rc genhtml_branch_coverage=1 00:10:44.350 --rc genhtml_function_coverage=1 00:10:44.350 --rc genhtml_legend=1 00:10:44.350 --rc geninfo_all_blocks=1 00:10:44.350 --rc geninfo_unexecuted_blocks=1 00:10:44.350 00:10:44.350 ' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.350 --rc genhtml_branch_coverage=1 00:10:44.350 --rc genhtml_function_coverage=1 00:10:44.350 --rc genhtml_legend=1 00:10:44.350 --rc geninfo_all_blocks=1 00:10:44.350 --rc geninfo_unexecuted_blocks=1 00:10:44.350 00:10:44.350 ' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.350 --rc genhtml_branch_coverage=1 00:10:44.350 --rc genhtml_function_coverage=1 00:10:44.350 --rc genhtml_legend=1 00:10:44.350 --rc geninfo_all_blocks=1 00:10:44.350 --rc geninfo_unexecuted_blocks=1 00:10:44.350 00:10:44.350 ' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.350 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.351 11:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:44.351 Cannot find device "nvmf_init_br" 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:44.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:44.610 Cannot find device "nvmf_init_br2" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:44.611 Cannot find device "nvmf_tgt_br" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.611 Cannot find device "nvmf_tgt_br2" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:44.611 Cannot find device "nvmf_init_br" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:44.611 Cannot find device "nvmf_init_br2" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:44.611 Cannot find device "nvmf_tgt_br" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:44.611 Cannot find device "nvmf_tgt_br2" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
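Each "Cannot find device" above is immediately followed by a true step: nvmf_veth_init runs its pre-setup cleanup unconditionally, so on a host where the devices do not exist yet every failure is swallowed and the script keeps going. The idiom, roughly:

    # Best-effort teardown of leftovers from a previous run; a missing device
    # is the expected case on a fresh host, hence the || true on every step.
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br down || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true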
00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:44.611 Cannot find device "nvmf_br" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:44.611 Cannot find device "nvmf_init_if" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:44.611 Cannot find device "nvmf_init_if2" 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:44.611 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:44.611 11:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:44.611 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:44.611 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:44.611 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:44.611 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.611 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:44.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
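The sequence that just ran builds the whole test network from scratch: a namespace for the target, veth pairs whose target ends are moved inside it, a bridge tying the host-side ends together, and tagged iptables rules opening TCP/4420. One initiator/target pair condensed from the trace (the *_if2/*_br2 interfaces mirror it for 10.0.0.2 and 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side; the _br ends will be enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # The comment tag lets teardown strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3    # initiator -> target reachability check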
00:10:44.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:44.872 00:10:44.872 --- 10.0.0.3 ping statistics --- 00:10:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.872 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:44.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:44.872 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:44.872 00:10:44.872 --- 10.0.0.4 ping statistics --- 00:10:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.872 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:44.872 00:10:44.872 --- 10.0.0.1 ping statistics --- 00:10:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.872 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:44.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:44.872 00:10:44.872 --- 10.0.0.2 ping statistics --- 00:10:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.872 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=74827 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 74827 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 74827 ']' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.872 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.872 [2024-12-16 11:30:50.223266] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:10:44.872 [2024-12-16 11:30:50.223373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.134 [2024-12-16 11:30:50.368717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.134 [2024-12-16 11:30:50.425976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.134 [2024-12-16 11:30:50.426044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.134 [2024-12-16 11:30:50.426057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.134 [2024-12-16 11:30:50.426066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.134 [2024-12-16 11:30:50.426073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
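Editor's note: the startup sequence above shows the harness launching the SPDK target inside the nvmf_tgt_ns_spdk namespace and then waiting for its RPC socket. A minimal standalone sketch of that pattern follows; the binary and socket paths are taken from the trace, while the polling loop is an illustrative stand-in for the real waitforlisten helper, not its actual implementation:

    # Launch nvmf_tgt inside the test namespace. -m 0xF pins it to cores 0-3
    # (the four "Reactor started" notices just below), and -e 0xFFFF enables
    # all tracepoint groups, as reported by app_setup_trace above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the app is listening on its RPC UNIX domain socket
    # (assumed simplification; the real helper retries with a timeout).
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
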
00:10:45.134 [2024-12-16 11:30:50.427204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.134 [2024-12-16 11:30:50.427371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.134 [2024-12-16 11:30:50.427432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.134 [2024-12-16 11:30:50.427438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 [2024-12-16 11:30:50.597270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 11:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.394 [2024-12-16 11:30:50.670356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:45.394 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:47.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.798 rmmod nvme_tcp 00:10:56.798 rmmod nvme_fabrics 00:10:56.798 rmmod nvme_keyring 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 74827 ']' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 74827 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 74827 ']' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 74827 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
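Editor's note: the connect/disconnect body that just ran condenses to a target-side RPC sequence plus a five-iteration host loop. A hedged sketch follows, reusing the rpc.py path implied by this repo layout and the exact arguments traced above; the host-side connect commands and in-loop wait are inferred from the "disconnected 1 controller(s)" lines, since the trace only shows the disconnect side:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target-side configuration, mirroring the rpc_cmd calls in the trace.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host-side loop: num_iterations=5, each pass producing one of the
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above.
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
        sleep 1    # stand-in for waiting until the controller is visible
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done
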
00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74827 00:10:56.798 killing process with pid 74827 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74827' 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 74827 00:10:56.798 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 74827 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:57.056 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:57.314 11:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:10:57.314 00:10:57.314 real 0m13.100s 00:10:57.314 user 0m47.004s 00:10:57.314 sys 0m1.590s 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.314 ************************************ 00:10:57.314 END TEST nvmf_connect_disconnect 00:10:57.314 ************************************ 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.314 ************************************ 00:10:57.314 START TEST nvmf_multitarget 00:10:57.314 ************************************ 00:10:57.314 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:57.572 * Looking for test storage... 
00:10:57.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:57.572 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.573 --rc genhtml_branch_coverage=1 00:10:57.573 --rc genhtml_function_coverage=1 00:10:57.573 --rc genhtml_legend=1 00:10:57.573 --rc geninfo_all_blocks=1 00:10:57.573 --rc geninfo_unexecuted_blocks=1 00:10:57.573 00:10:57.573 ' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.573 --rc genhtml_branch_coverage=1 00:10:57.573 --rc genhtml_function_coverage=1 00:10:57.573 --rc genhtml_legend=1 00:10:57.573 --rc geninfo_all_blocks=1 00:10:57.573 --rc geninfo_unexecuted_blocks=1 00:10:57.573 00:10:57.573 ' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.573 --rc genhtml_branch_coverage=1 00:10:57.573 --rc genhtml_function_coverage=1 00:10:57.573 --rc genhtml_legend=1 00:10:57.573 --rc geninfo_all_blocks=1 00:10:57.573 --rc geninfo_unexecuted_blocks=1 00:10:57.573 00:10:57.573 ' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.573 --rc genhtml_branch_coverage=1 00:10:57.573 --rc genhtml_function_coverage=1 00:10:57.573 --rc genhtml_legend=1 00:10:57.573 --rc geninfo_all_blocks=1 00:10:57.573 --rc geninfo_unexecuted_blocks=1 00:10:57.573 00:10:57.573 ' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.573 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:57.574 Cannot find device "nvmf_init_br" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:57.574 Cannot find device "nvmf_init_br2" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:57.574 Cannot find device "nvmf_tgt_br" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.574 Cannot find device "nvmf_tgt_br2" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:57.574 Cannot find device "nvmf_init_br" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:57.574 Cannot find device "nvmf_init_br2" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:57.574 Cannot find device "nvmf_tgt_br" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:57.574 Cannot find device "nvmf_tgt_br2" 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:10:57.574 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:57.832 Cannot find device "nvmf_br" 00:10:57.832 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:10:57.832 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:57.832 Cannot find device "nvmf_init_if" 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:57.832 Cannot find device "nvmf_init_if2" 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:57.832 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:57.833 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:58.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:10:58.091 00:10:58.091 --- 10.0.0.3 ping statistics --- 00:10:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.091 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:58.091 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:58.091 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:58.091 00:10:58.091 --- 10.0.0.4 ping statistics --- 00:10:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.091 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:58.091 00:10:58.091 --- 10.0.0.1 ping statistics --- 00:10:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.091 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:58.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:58.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:10:58.091 00:10:58.091 --- 10.0.0.2 ping statistics --- 00:10:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.091 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=75269 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 75269 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 75269 ']' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.091 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.091 [2024-12-16 11:31:03.351364] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
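Editor's note: this second app instance (pid 75269) backs the multitarget test whose RPC calls follow below. Its assertions reduce to a short sequence against multitarget_rpc.py (path as in the trace); the explicit count checks here are a sketch of what the '[' N '!=' N ']' tests in the trace verify:

    mrpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    # Baseline: only the default target exists.
    [ "$($mrpc nvmf_get_targets | jq length)" -eq 1 ]

    # Create two extra targets (-n name, -s 32 as in the trace) -> count becomes 3.
    $mrpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $mrpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($mrpc nvmf_get_targets | jq length)" -eq 3 ]

    # Delete both again -> back to the single default target.
    $mrpc nvmf_delete_target -n nvmf_tgt_1
    $mrpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($mrpc nvmf_get_targets | jq length)" -eq 1 ]
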
00:10:58.091 [2024-12-16 11:31:03.352173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.349 [2024-12-16 11:31:03.507526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.349 [2024-12-16 11:31:03.568643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.349 [2024-12-16 11:31:03.568700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.349 [2024-12-16 11:31:03.568718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.349 [2024-12-16 11:31:03.568733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.349 [2024-12-16 11:31:03.568743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.349 [2024-12-16 11:31:03.569973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.349 [2024-12-16 11:31:03.570069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.349 [2024-12-16 11:31:03.570135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.349 [2024-12-16 11:31:03.570135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:59.355 "nvmf_tgt_1" 00:10:59.355 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:59.622 "nvmf_tgt_2" 00:10:59.622 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:59.622 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:10:59.622 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:59.622 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:59.880 true 00:10:59.880 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:59.880 true 00:10:59.880 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:59.880 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:00.138 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.139 rmmod nvme_tcp 00:11:00.139 rmmod nvme_fabrics 00:11:00.139 rmmod nvme_keyring 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 75269 ']' 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 75269 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 75269 ']' 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 75269 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75269 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.139 killing process with pid 75269 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
75269' 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 75269 00:11:00.139 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 75269 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.397 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:11:00.656 00:11:00.656 
real 0m3.309s 00:11:00.656 user 0m10.101s 00:11:00.656 sys 0m0.783s 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.656 ************************************ 00:11:00.656 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.656 END TEST nvmf_multitarget 00:11:00.656 ************************************ 00:11:00.656 11:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:00.656 11:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.656 11:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.656 11:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.656 ************************************ 00:11:00.656 START TEST nvmf_rpc 00:11:00.656 ************************************ 00:11:00.656 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:00.915 * Looking for test storage... 00:11:00.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.915 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.915 --rc genhtml_branch_coverage=1 00:11:00.916 --rc genhtml_function_coverage=1 00:11:00.916 --rc genhtml_legend=1 00:11:00.916 --rc geninfo_all_blocks=1 00:11:00.916 --rc geninfo_unexecuted_blocks=1 00:11:00.916 00:11:00.916 ' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.916 --rc genhtml_branch_coverage=1 00:11:00.916 --rc genhtml_function_coverage=1 00:11:00.916 --rc genhtml_legend=1 00:11:00.916 --rc geninfo_all_blocks=1 00:11:00.916 --rc geninfo_unexecuted_blocks=1 00:11:00.916 00:11:00.916 ' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.916 --rc genhtml_branch_coverage=1 00:11:00.916 --rc genhtml_function_coverage=1 00:11:00.916 --rc genhtml_legend=1 00:11:00.916 --rc geninfo_all_blocks=1 00:11:00.916 --rc geninfo_unexecuted_blocks=1 00:11:00.916 00:11:00.916 ' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.916 --rc genhtml_branch_coverage=1 00:11:00.916 --rc genhtml_function_coverage=1 00:11:00.916 --rc genhtml_legend=1 00:11:00.916 --rc geninfo_all_blocks=1 00:11:00.916 --rc geninfo_unexecuted_blocks=1 00:11:00.916 00:11:00.916 ' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.916 11:31:06 
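The `lt 1.15 2` call above is scripts/common.sh checking whether the installed lcov predates 2.x before choosing coverage flags: cmp_versions splits both version strings on `.`, `-`, and `:` (the `IFS=.-:` reads above) and compares them component by component as integers. A standalone sketch of that comparison, padding missing components to 0; the function name version_lt is illustrative, not the script's own:

    version_lt() {    # succeeds when $1 is strictly older than $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i a b
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use the lcov_branch_coverage spellings"

The real helper returns at `scripts/common.sh@368` for the same reason: the first components, 1 versus 2, already decide the answer, which is why the old `--rc lcov_branch_coverage=1` option spelling is exported just below.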
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.916 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.916 Cannot find device "nvmf_init_br" 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:00.916 11:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.916 Cannot find device "nvmf_init_br2" 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.916 Cannot find device "nvmf_tgt_br" 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.916 Cannot find device "nvmf_tgt_br2" 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.916 Cannot find device "nvmf_init_br" 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:11:00.916 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.916 Cannot find device "nvmf_init_br2" 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.917 Cannot find device "nvmf_tgt_br" 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.917 Cannot find device "nvmf_tgt_br2" 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.917 Cannot find device "nvmf_br" 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:11:00.917 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.175 Cannot find device "nvmf_init_if" 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.175 Cannot find device "nvmf_init_if2" 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.175 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.176 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.176 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:11:01.176 00:11:01.176 --- 10.0.0.3 ping statistics --- 00:11:01.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.176 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:01.176 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.176 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.176 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:11:01.176 00:11:01.176 --- 10.0.0.4 ping statistics --- 00:11:01.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.176 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:01.176 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:01.176 00:11:01.176 --- 10.0.0.1 ping statistics --- 00:11:01.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.176 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:01.176 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:01.434 00:11:01.434 --- 10.0.0.2 ping statistics --- 00:11:01.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.434 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=75550 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 75550 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 75550 ']' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.434 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.434 [2024-12-16 11:31:06.686961] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
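All four pings passing means nvmf_veth_init has finished building the test fabric: veth pairs nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay on the host as the initiator side, their counterparts nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. The earlier "Cannot find device" / "Cannot open network namespace" lines were the pre-clean pass failing harmlessly (each followed by `# true`) because none of this existed yet. Condensed to one leg of the topology, with the long SPDK_NVMF rule comments shortened:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br     # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

With the fabric verified, nvmfappstart launches the target inside the namespace; that process is the one whose SPDK/DPDK startup banner follows:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &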
00:11:01.434 [2024-12-16 11:31:06.687085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.434 [2024-12-16 11:31:06.836894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.693 [2024-12-16 11:31:06.896459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.693 [2024-12-16 11:31:06.896519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.693 [2024-12-16 11:31:06.896531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.693 [2024-12-16 11:31:06.896539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.693 [2024-12-16 11:31:06.896547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.693 [2024-12-16 11:31:06.897710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.693 [2024-12-16 11:31:06.898103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.693 [2024-12-16 11:31:06.898950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.693 [2024-12-16 11:31:06.899003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:01.693 "poll_groups": [ 00:11:01.693 { 00:11:01.693 "admin_qpairs": 0, 00:11:01.693 "completed_nvme_io": 0, 00:11:01.693 "current_admin_qpairs": 0, 00:11:01.693 "current_io_qpairs": 0, 00:11:01.693 "io_qpairs": 0, 00:11:01.693 "name": "nvmf_tgt_poll_group_000", 00:11:01.693 "pending_bdev_io": 0, 00:11:01.693 "transports": [] 00:11:01.693 }, 00:11:01.693 { 00:11:01.693 "admin_qpairs": 0, 00:11:01.693 "completed_nvme_io": 0, 00:11:01.693 "current_admin_qpairs": 0, 00:11:01.693 "current_io_qpairs": 0, 00:11:01.693 "io_qpairs": 0, 00:11:01.693 "name": "nvmf_tgt_poll_group_001", 00:11:01.693 "pending_bdev_io": 0, 00:11:01.693 "transports": [] 00:11:01.693 }, 00:11:01.693 { 00:11:01.693 "admin_qpairs": 0, 00:11:01.693 "completed_nvme_io": 0, 00:11:01.693 "current_admin_qpairs": 0, 00:11:01.693 "current_io_qpairs": 0, 
00:11:01.693 "io_qpairs": 0, 00:11:01.693 "name": "nvmf_tgt_poll_group_002", 00:11:01.693 "pending_bdev_io": 0, 00:11:01.693 "transports": [] 00:11:01.693 }, 00:11:01.693 { 00:11:01.693 "admin_qpairs": 0, 00:11:01.693 "completed_nvme_io": 0, 00:11:01.693 "current_admin_qpairs": 0, 00:11:01.693 "current_io_qpairs": 0, 00:11:01.693 "io_qpairs": 0, 00:11:01.693 "name": "nvmf_tgt_poll_group_003", 00:11:01.693 "pending_bdev_io": 0, 00:11:01.693 "transports": [] 00:11:01.693 } 00:11:01.693 ], 00:11:01.693 "tick_rate": 2200000000 00:11:01.693 }' 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:01.693 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.952 [2024-12-16 11:31:07.208455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:01.952 "poll_groups": [ 00:11:01.952 { 00:11:01.952 "admin_qpairs": 0, 00:11:01.952 "completed_nvme_io": 0, 00:11:01.952 "current_admin_qpairs": 0, 00:11:01.952 "current_io_qpairs": 0, 00:11:01.952 "io_qpairs": 0, 00:11:01.952 "name": "nvmf_tgt_poll_group_000", 00:11:01.952 "pending_bdev_io": 0, 00:11:01.952 "transports": [ 00:11:01.952 { 00:11:01.952 "trtype": "TCP" 00:11:01.952 } 00:11:01.952 ] 00:11:01.952 }, 00:11:01.952 { 00:11:01.952 "admin_qpairs": 0, 00:11:01.952 "completed_nvme_io": 0, 00:11:01.952 "current_admin_qpairs": 0, 00:11:01.952 "current_io_qpairs": 0, 00:11:01.952 "io_qpairs": 0, 00:11:01.952 "name": "nvmf_tgt_poll_group_001", 00:11:01.952 "pending_bdev_io": 0, 00:11:01.952 "transports": [ 00:11:01.952 { 00:11:01.952 "trtype": "TCP" 00:11:01.952 } 00:11:01.952 ] 00:11:01.952 }, 00:11:01.952 { 00:11:01.952 "admin_qpairs": 0, 00:11:01.952 "completed_nvme_io": 0, 00:11:01.952 "current_admin_qpairs": 0, 00:11:01.952 "current_io_qpairs": 0, 00:11:01.952 "io_qpairs": 0, 00:11:01.952 "name": "nvmf_tgt_poll_group_002", 00:11:01.952 "pending_bdev_io": 0, 00:11:01.952 "transports": [ 00:11:01.952 { 00:11:01.952 "trtype": "TCP" 00:11:01.952 } 
00:11:01.952 ] 00:11:01.952 }, 00:11:01.952 { 00:11:01.952 "admin_qpairs": 0, 00:11:01.952 "completed_nvme_io": 0, 00:11:01.952 "current_admin_qpairs": 0, 00:11:01.952 "current_io_qpairs": 0, 00:11:01.952 "io_qpairs": 0, 00:11:01.952 "name": "nvmf_tgt_poll_group_003", 00:11:01.952 "pending_bdev_io": 0, 00:11:01.952 "transports": [ 00:11:01.952 { 00:11:01.952 "trtype": "TCP" 00:11:01.952 } 00:11:01.952 ] 00:11:01.952 } 00:11:01.952 ], 00:11:01.952 "tick_rate": 2200000000 00:11:01.952 }' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.952 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 Malloc1 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:02.211 11:31:07 
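From here the target is provisioned purely over JSON-RPC: nvmf_get_stats first shows four poll groups (one per core in the 0xF mask) with empty transport lists, the TCP transport is created, and the stats are re-read to confirm a TCP entry per poll group with zero qpairs; then the test builds its storage — a 64 MB malloc bdev with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and allow-any-host switched off so the access-control check that follows can fail on purpose. The same sequence through rpc.py (the scripts/rpc.py invocation is an assumption; the harness's rpc_cmd wrapper resolves the path itself):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags as in the run; -u = io-unit-size
    scripts/rpc.py nvmf_get_stats \
        | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # expect 0: nothing connected

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB backing store, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: disable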
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 [2024-12-16 11:31:07.411561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.3 -s 4420 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.3 -s 4420 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.3 -s 4420 00:11:02.211 [2024-12-16 11:31:07.440400] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052' 00:11:02.211 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:02.211 could not add new controller: failed to write to nvme-fabrics device 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
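That Input/output error is the expected outcome: with allow-any-host disabled and no hosts whitelisted, the target rejects the connection at nvmf_qpair_access_allowed and the kernel initiator's write to /dev/nvme-fabrics fails. The NOT wrapper around the command turns "exited non-zero" into a pass. Stripped of the harness plumbing, the assertion is roughly:

    # expected to fail: this host NQN is not on cnode1's allowed list yet
    if nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
            --hostid=503bc55d-5584-47ba-b171-ff181f376052 \
            -a 10.0.0.3 -s 4420; then
        echo "BUG: connect should have been rejected" >&2
        exit 1
    fi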
00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.211 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
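This is the other half of the access-control test: once nvmf_subsystem_add_host whitelists the host NQN, the identical connect succeeds, waitforserial polls until a block device carrying the subsystem serial appears, and the test disconnects cleanly. A rough equivalent, reusing the host NQN from the log ($hostnqn is shorthand introduced here, and the poll bounds mirror the `(( i++ <= 15 ))` / `sleep 2` loop above):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$hostnqn" --hostid="${hostnqn##*:}" -a 10.0.0.3 -s 4420

    # waitforserial, inlined: up to ~16 tries, 2 s apart, until the serial shows up
    for _ in $(seq 1 16); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1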
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:04.765 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:04.766 [2024-12-16 11:31:09.741001] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052' 00:11:04.766 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:04.766 could not add new controller: failed to write to nvme-fabrics device 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.766 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.664 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.664 [2024-12-16 11:31:12.037971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.664 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.922 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.922 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.922 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.922 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.922 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.823 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.823 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.823 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
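What runs from `seq 1 5` onward is target/rpc.sh's main loop (loops=5 was set right after common.sh was sourced): five times over, the subsystem is created from nothing, a listener and namespace 5 are added, any host is allowed, the initiator connects and verifies the serial, then everything is disconnected and deleted again, exercising the create/delete RPC paths repeatedly. One iteration, condensed, with $hostnqn and $hostid standing in for the values used in the earlier connects:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # force nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1        # default: enable

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$hostnqn" --hostid="$hostid" -a 10.0.0.3 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # after waitforserial succeeds

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1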
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 [2024-12-16 11:31:14.348930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 11:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.081 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:09.339 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.339 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:09.339 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.339 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:09.339 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.261 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.261 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.261 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.261 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.261 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.262 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:11.262 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.519 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 11:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 [2024-12-16 11:31:16.756291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:11.777 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.777 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.777 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.777 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:11.777 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:13.675 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.675 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:13.676 11:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 [2024-12-16 11:31:19.064097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.676 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:13.933 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.933 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.933 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.933 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.933 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.464 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.465 11:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 [2024-12-16 11:31:21.375350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.465 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.388 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
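Each pass above repeats the same sequence: nvmf_create_subsystem with serial SPDKISFASTANDAWESOME, nvmf_subsystem_add_listener on 10.0.0.3:4420, nvmf_subsystem_add_ns, nvme connect from the initiator, then polling until the namespace appears before disconnecting again. A minimal standalone sketch of the two polling helpers, reconstructed from the xtrace lines above — the function names, the 15-try bound, and the lsblk/grep probes are taken directly from the trace; the argument handling and the retry sleep are assumptions:

    # Wait until a block device advertising the given serial shows up.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2  # give udev time to create /dev entries after nvme connect
        while (( i++ <= 15 )); do
            # count lsblk rows whose SERIAL column matches the subsystem serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    # Wait until no block device advertises the serial any more.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }

Used as waitforserial SPDKISFASTANDAWESOME right after nvme connect, mirroring the rpc.sh@88 and rpc.sh@91 call sites traced above.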
00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 [2024-12-16 11:31:23.698435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 [2024-12-16 11:31:23.762504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.389 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.647 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.648 11:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 [2024-12-16 11:31:23.810533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 [2024-12-16 11:31:23.858582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 
11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 [2024-12-16 11:31:23.906651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]]
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:18.648 "poll_groups": [
00:11:18.648 {
00:11:18.648 "admin_qpairs": 2,
00:11:18.648 "completed_nvme_io": 165,
00:11:18.648 "current_admin_qpairs": 0,
00:11:18.648 "current_io_qpairs": 0,
00:11:18.648 "io_qpairs": 16,
00:11:18.648 "name": "nvmf_tgt_poll_group_000",
00:11:18.648 "pending_bdev_io": 0,
00:11:18.648 "transports": [
00:11:18.648 {
00:11:18.648 "trtype": "TCP"
00:11:18.648 }
00:11:18.648 ]
00:11:18.648 },
00:11:18.648 {
00:11:18.648 "admin_qpairs": 3,
00:11:18.648 "completed_nvme_io": 165,
00:11:18.648 "current_admin_qpairs": 0,
00:11:18.648 "current_io_qpairs": 0,
00:11:18.648 "io_qpairs": 17,
00:11:18.648 "name": "nvmf_tgt_poll_group_001",
00:11:18.648 "pending_bdev_io": 0,
00:11:18.648 "transports": [
00:11:18.648 {
00:11:18.648 "trtype": "TCP"
00:11:18.648 }
00:11:18.648 ]
00:11:18.648 },
00:11:18.648 {
00:11:18.648 "admin_qpairs": 1,
00:11:18.648 "completed_nvme_io": 71,
00:11:18.648 "current_admin_qpairs": 0,
00:11:18.648 "current_io_qpairs": 0,
00:11:18.648 "io_qpairs": 19,
00:11:18.648 "name": "nvmf_tgt_poll_group_002",
00:11:18.648 "pending_bdev_io": 0,
00:11:18.648 "transports": [
00:11:18.648 {
00:11:18.648 "trtype": "TCP"
00:11:18.648 }
00:11:18.648 ]
00:11:18.648 },
00:11:18.648 {
00:11:18.648 "admin_qpairs": 1,
00:11:18.648 "completed_nvme_io": 19,
00:11:18.648 "current_admin_qpairs": 0,
00:11:18.648 "current_io_qpairs": 0,
00:11:18.648 "io_qpairs": 18,
00:11:18.648 "name": "nvmf_tgt_poll_group_003",
00:11:18.648 "pending_bdev_io": 0,
00:11:18.648 "transports": [
00:11:18.648 {
00:11:18.648 "trtype": "TCP"
00:11:18.648 }
00:11:18.648 ]
00:11:18.648 }
00:11:18.648 ],
00:11:18.648 "tick_rate": 2200000000 }'
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:18.648 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:18.648 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:18.648 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:18.648 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:18.648 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:18.648 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 ))
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:18.907 rmmod nvme_tcp
00:11:18.907 rmmod nvme_fabrics
00:11:18.907 rmmod nvme_keyring
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 75550 ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 75550
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 75550 ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 75550
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75550
00:11:18.907 killing process with pid 75550
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75550'
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 75550
00:11:18.907 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 75550
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:19.165 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0
00:11:19.423 
00:11:19.423 real 0m18.649s
00:11:19.423 user 1m8.920s
00:11:19.423 sys 0m2.575s
00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 ************************************ 00:11:19.423 END TEST nvmf_rpc 00:11:19.423 ************************************ 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 ************************************ 00:11:19.423 START TEST nvmf_invalid 00:11:19.423 ************************************ 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:19.423 * Looking for test storage... 00:11:19.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.423 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.682 --rc genhtml_branch_coverage=1 00:11:19.682 --rc genhtml_function_coverage=1 00:11:19.682 --rc genhtml_legend=1 00:11:19.682 --rc geninfo_all_blocks=1 00:11:19.682 --rc geninfo_unexecuted_blocks=1 00:11:19.682 00:11:19.682 ' 00:11:19.682 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.682 --rc genhtml_branch_coverage=1 00:11:19.682 --rc genhtml_function_coverage=1 00:11:19.682 --rc genhtml_legend=1 00:11:19.682 --rc geninfo_all_blocks=1 00:11:19.682 --rc geninfo_unexecuted_blocks=1 00:11:19.682 00:11:19.682 ' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.683 --rc genhtml_branch_coverage=1 00:11:19.683 --rc genhtml_function_coverage=1 00:11:19.683 --rc genhtml_legend=1 00:11:19.683 --rc geninfo_all_blocks=1 00:11:19.683 --rc geninfo_unexecuted_blocks=1 00:11:19.683 00:11:19.683 ' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.683 --rc genhtml_branch_coverage=1 00:11:19.683 --rc genhtml_function_coverage=1 00:11:19.683 --rc genhtml_legend=1 00:11:19.683 --rc geninfo_all_blocks=1 00:11:19.683 --rc geninfo_unexecuted_blocks=1 00:11:19.683 00:11:19.683 ' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:19.683 11:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.683 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
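The trace above ends with nvmf_veth_init capturing the namespace-exec prefix in a bash array (nvmf/common.sh@156). A minimal sketch of that pattern, with names taken from the trace and an arbitrary example command after the prefix:

    # Run target-side commands inside the test namespace via an array prefix.
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    # Expands to: ip netns exec nvmf_tgt_ns_spdk ip addr show
    "${NVMF_TARGET_NS_CMD[@]}" ip addr show

Keeping the prefix in an array rather than a flat string preserves word boundaries when the expanded command is executed, which is why later steps can prepend it to NVMF_APP verbatim.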
00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:19.683 Cannot find device "nvmf_init_br" 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:19.683 Cannot find device "nvmf_init_br2" 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:19.683 Cannot find device "nvmf_tgt_br" 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.683 Cannot find device "nvmf_tgt_br2" 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:11:19.683 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:19.683 Cannot find device "nvmf_init_br" 00:11:19.683 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:11:19.683 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:19.683 Cannot find device "nvmf_init_br2" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:19.684 Cannot find device "nvmf_tgt_br" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:19.684 Cannot find device "nvmf_tgt_br2" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:19.684 Cannot find device "nvmf_br" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:19.684 Cannot find device "nvmf_init_if" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:19.684 Cannot find device "nvmf_init_if2" 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.684 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.684 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:19.942 11:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:19.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:19.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:19.942 00:11:19.942 --- 10.0.0.3 ping statistics --- 00:11:19.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.942 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:19.942 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:19.942 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:11:19.942 00:11:19.942 --- 10.0.0.4 ping statistics --- 00:11:19.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.942 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:19.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:19.942 00:11:19.942 --- 10.0.0.1 ping statistics --- 00:11:19.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.942 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:19.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:19.942 00:11:19.942 --- 10.0.0.2 ping statistics --- 00:11:19.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.942 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.942 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=76103 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 76103 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 76103 ']' 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.200 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:20.200 [2024-12-16 11:31:25.421224] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
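At this point the trace launches the target: nvmf/common.sh@508 runs nvmf_tgt inside the namespace, @509 records its pid, and @510 waits for the RPC socket at /var/tmp/spdk.sock. A simplified sketch of that launch-and-wait shape — the socket poll below is an assumption; the real waitforlisten helper in autotest_common.sh does more, such as checking that the process is still alive while polling:

    # Start the target in the test namespace and wait for its RPC socket.
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done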
00:11:20.200 [2024-12-16 11:31:25.421334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.200 [2024-12-16 11:31:25.569255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.458 [2024-12-16 11:31:25.631103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.458 [2024-12-16 11:31:25.631404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.458 [2024-12-16 11:31:25.631425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.458 [2024-12-16 11:31:25.631435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.458 [2024-12-16 11:31:25.631443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.458 [2024-12-16 11:31:25.632611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.458 [2024-12-16 11:31:25.632659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.458 [2024-12-16 11:31:25.632726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.458 [2024-12-16 11:31:25.632730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.025 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.025 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:21.025 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.025 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.025 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:21.283 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.283 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:21.283 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31315 00:11:21.542 [2024-12-16 11:31:26.797125] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:21.542 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/16 11:31:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31315 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:21.542 request: 00:11:21.542 { 00:11:21.542 "method": "nvmf_create_subsystem", 00:11:21.542 "params": { 00:11:21.542 "nqn": "nqn.2016-06.io.spdk:cnode31315", 00:11:21.542 "tgt_name": "foobar" 00:11:21.542 } 00:11:21.542 } 00:11:21.542 Got JSON-RPC error response 00:11:21.542 GoRPCClient: error on JSON-RPC call' 00:11:21.542 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/16 11:31:26 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode31315 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:21.542 request: 00:11:21.542 { 00:11:21.542 "method": "nvmf_create_subsystem", 00:11:21.542 "params": { 00:11:21.542 "nqn": "nqn.2016-06.io.spdk:cnode31315", 00:11:21.542 "tgt_name": "foobar" 00:11:21.542 } 00:11:21.542 } 00:11:21.542 Got JSON-RPC error response 00:11:21.542 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:21.542 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:21.542 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25413 00:11:21.801 [2024-12-16 11:31:27.129462] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25413: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:21.801 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25413 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:21.801 request: 00:11:21.801 { 00:11:21.801 "method": "nvmf_create_subsystem", 00:11:21.801 "params": { 00:11:21.801 "nqn": "nqn.2016-06.io.spdk:cnode25413", 00:11:21.801 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:21.801 } 00:11:21.801 } 00:11:21.801 Got JSON-RPC error response 00:11:21.801 GoRPCClient: error on JSON-RPC call' 00:11:21.801 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25413 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:21.801 request: 00:11:21.801 { 00:11:21.801 "method": "nvmf_create_subsystem", 00:11:21.801 "params": { 00:11:21.801 "nqn": "nqn.2016-06.io.spdk:cnode25413", 00:11:21.801 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:21.801 } 00:11:21.801 } 00:11:21.801 Got JSON-RPC error response 00:11:21.801 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:21.801 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:21.801 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29463 00:11:22.060 [2024-12-16 11:31:27.449728] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29463: invalid model number 'SPDK_Controller' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29463], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:22.320 request: 00:11:22.320 { 00:11:22.320 "method": "nvmf_create_subsystem", 00:11:22.320 "params": { 00:11:22.320 "nqn": "nqn.2016-06.io.spdk:cnode29463", 00:11:22.320 "model_number": "SPDK_Controller\u001f" 
00:11:22.320 } 00:11:22.320 } 00:11:22.320 Got JSON-RPC error response 00:11:22.320 GoRPCClient: error on JSON-RPC call' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29463], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:22.320 request: 00:11:22.320 { 00:11:22.320 "method": "nvmf_create_subsystem", 00:11:22.320 "params": { 00:11:22.320 "nqn": "nqn.2016-06.io.spdk:cnode29463", 00:11:22.320 "model_number": "SPDK_Controller\u001f" 00:11:22.320 } 00:11:22.320 } 00:11:22.320 Got JSON-RPC error response 00:11:22.320 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 
11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:22.320 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 
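The invalid-parameter checks running through this stretch of the trace (target/invalid.sh@40, @45, @50) all follow the same shape: call scripts/rpc.py with a deliberately bad argument, capture the Go client's error output, and pattern-match the message. A condensed sketch of the first probe, using the same path, flag, and NQN the trace shows:

    # Negative test: creating a subsystem on a nonexistent target must fail
    # with the error the trace shows (Code=-32603, "Unable to find target").
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31315 2>&1) || true
    [[ "$out" == *"Unable to find target"* ]] && echo "invalid tgt_name rejected as expected"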
00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 
00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sg ;6LyvG326/I,0O|rV%' 00:11:22.321 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'sg ;6LyvG326/I,0O|rV%' nqn.2016-06.io.spdk:cnode12171 00:11:22.580 [2024-12-16 11:31:27.890088] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12171: invalid serial number 'sg ;6LyvG326/I,0O|rV%' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12171 serial_number:sg ;6LyvG326/I,0O|rV%], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN sg ;6LyvG326/I,0O|rV% 00:11:22.580 request: 00:11:22.580 { 00:11:22.580 "method": "nvmf_create_subsystem", 00:11:22.580 "params": { 00:11:22.580 "nqn": "nqn.2016-06.io.spdk:cnode12171", 
00:11:22.580 "serial_number": "sg ;6LyvG326/I,0O|rV%" 00:11:22.580 } 00:11:22.580 } 00:11:22.580 Got JSON-RPC error response 00:11:22.580 GoRPCClient: error on JSON-RPC call' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/16 11:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12171 serial_number:sg ;6LyvG326/I,0O|rV%], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN sg ;6LyvG326/I,0O|rV% 00:11:22.580 request: 00:11:22.580 { 00:11:22.580 "method": "nvmf_create_subsystem", 00:11:22.580 "params": { 00:11:22.580 "nqn": "nqn.2016-06.io.spdk:cnode12171", 00:11:22.580 "serial_number": "sg ;6LyvG326/I,0O|rV%" 00:11:22.580 } 00:11:22.580 } 00:11:22.580 Got JSON-RPC error response 00:11:22.580 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:22.580 11:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:22.580 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:22.581 
11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.581 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.840 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 
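The long printf/echo/string+= run above and below is gen_random_s (target/invalid.sh@19-31) assembling a 41-character random model number one character at a time, with RANDOM seeded to 0 earlier (target/invalid.sh@16) so the result is reproducible. Condensed to its essentials — this is a sketch consistent with the trace, not a copy of the script:

    # Build a $1-character string from random printable-range ASCII (codes 32-127).
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))
        for (( ll = 0; ll < length; ll++ )); do
            # printf gives the hex code, echo -e turns it into the character.
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

Note that the range 32-127 includes DEL (0x7f), which is why the generated serial and model numbers — rendered as \u007f in the JSON-RPC errors — are invalid on purpose.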
00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:22.840 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=8 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G' 00:11:22.841 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G' nqn.2016-06.io.spdk:cnode12279 00:11:23.101 [2024-12-16 11:31:28.418587] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12279: invalid model number 'Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G' 00:11:23.101 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/16 11:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G nqn:nqn.2016-06.io.spdk:cnode12279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G 00:11:23.101 request: 00:11:23.101 { 00:11:23.101 "method": "nvmf_create_subsystem", 00:11:23.101 "params": { 00:11:23.101 "nqn": "nqn.2016-06.io.spdk:cnode12279", 
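The long run of printf %x / echo -e / string+= entries above is invalid.sh assembling a candidate model number one character at a time; the result deliberately contains DEL (\x7f, shown as \u007f in the JSON below) and other bytes outside printable ASCII, so the target rejects it as an invalid MN. A minimal sketch of that string-building technique, assuming a helper name and a code-point range of our own choosing (neither is taken from the script):

gen_random_string() {
    local length=$1 string='' ll x
    for ((ll = 0; ll < length; ll++)); do
        # render a code point as hex, then expand it back into a character
        x=$(printf '%x' $((RANDOM % 95 + 32)))   # printable ASCII 0x20-0x7e
        string+=$(echo -e "\x$x")
    done
    echo "$string"
}
gen_random_string 41   # e.g. one oversized candidate model number

The trace above follows exactly this shape, except that the real script also admits code points like 127 so that validation is guaranteed to fail.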
00:11:23.101 "model_number": "Bx3rmbm)z4}Y\u007f[*Wy9prz i\u007fMWu yVh#Cb+C8k]4G" 00:11:23.101 } 00:11:23.101 } 00:11:23.101 Got JSON-RPC error response 00:11:23.101 GoRPCClient: error on JSON-RPC call' 00:11:23.101 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/16 11:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G nqn:nqn.2016-06.io.spdk:cnode12279], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Bx3rmbm)z4}Y[*Wy9prz iMWu yVh#Cb+C8k]4G 00:11:23.101 request: 00:11:23.101 { 00:11:23.101 "method": "nvmf_create_subsystem", 00:11:23.101 "params": { 00:11:23.101 "nqn": "nqn.2016-06.io.spdk:cnode12279", 00:11:23.101 "model_number": "Bx3rmbm)z4}Y\u007f[*Wy9prz i\u007fMWu yVh#Cb+C8k]4G" 00:11:23.101 } 00:11:23.101 } 00:11:23.101 Got JSON-RPC error response 00:11:23.101 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:23.101 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:23.359 [2024-12-16 11:31:28.722803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.359 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:23.624 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:23.624 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:23.624 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:23.624 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:23.624 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:24.191 [2024-12-16 11:31:29.344144] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:24.191 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:24.191 request: 00:11:24.191 { 00:11:24.191 "method": "nvmf_subsystem_remove_listener", 00:11:24.191 "params": { 00:11:24.191 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:24.191 "listen_address": { 00:11:24.191 "trtype": "tcp", 00:11:24.191 "traddr": "", 00:11:24.191 "trsvcid": "4421" 00:11:24.191 } 00:11:24.191 } 00:11:24.191 } 00:11:24.191 Got JSON-RPC error response 00:11:24.191 GoRPCClient: error on JSON-RPC call' 00:11:24.191 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:24.191 request: 00:11:24.191 { 00:11:24.191 "method": "nvmf_subsystem_remove_listener", 00:11:24.191 "params": { 00:11:24.191 "nqn": 
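The nvmf_subsystem_remove_listener failure above is engineered: IP= ends up empty (the echo ''/head -n 1 pipeline produces nothing), so traddr is blank and the RPC fails parameter validation rather than listener lookup, which is what the != *Unable to stop listener* assertion checks. With a real address, the add/remove pair would look like this; the flag spelling is taken from elsewhere in this log, the address and port here are illustrative:

scripts/rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.3 -s 4421
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.3 -s 4421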
"nqn.2016-06.io.spdk:cnode", 00:11:24.191 "listen_address": { 00:11:24.191 "trtype": "tcp", 00:11:24.191 "traddr": "", 00:11:24.191 "trsvcid": "4421" 00:11:24.191 } 00:11:24.191 } 00:11:24.191 } 00:11:24.191 Got JSON-RPC error response 00:11:24.191 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:24.191 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8196 -i 0 00:11:24.450 [2024-12-16 11:31:29.664405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8196: invalid cntlid range [0-65519] 00:11:24.450 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8196], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:24.450 request: 00:11:24.450 { 00:11:24.450 "method": "nvmf_create_subsystem", 00:11:24.450 "params": { 00:11:24.450 "nqn": "nqn.2016-06.io.spdk:cnode8196", 00:11:24.450 "min_cntlid": 0 00:11:24.450 } 00:11:24.450 } 00:11:24.450 Got JSON-RPC error response 00:11:24.450 GoRPCClient: error on JSON-RPC call' 00:11:24.450 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8196], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:24.450 request: 00:11:24.450 { 00:11:24.450 "method": "nvmf_create_subsystem", 00:11:24.450 "params": { 00:11:24.450 "nqn": "nqn.2016-06.io.spdk:cnode8196", 00:11:24.450 "min_cntlid": 0 00:11:24.450 } 00:11:24.450 } 00:11:24.450 Got JSON-RPC error response 00:11:24.450 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.450 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25945 -i 65520 00:11:24.709 [2024-12-16 11:31:29.925230] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25945: invalid cntlid range [65520-65519] 00:11:24.709 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25945], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:24.709 request: 00:11:24.709 { 00:11:24.709 "method": "nvmf_create_subsystem", 00:11:24.709 "params": { 00:11:24.709 "nqn": "nqn.2016-06.io.spdk:cnode25945", 00:11:24.709 "min_cntlid": 65520 00:11:24.709 } 00:11:24.709 } 00:11:24.709 Got JSON-RPC error response 00:11:24.709 GoRPCClient: error on JSON-RPC call' 00:11:24.709 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/16 11:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25945], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:24.709 request: 00:11:24.709 { 00:11:24.709 "method": "nvmf_create_subsystem", 00:11:24.709 "params": { 00:11:24.709 
"nqn": "nqn.2016-06.io.spdk:cnode25945", 00:11:24.709 "min_cntlid": 65520 00:11:24.709 } 00:11:24.709 } 00:11:24.709 Got JSON-RPC error response 00:11:24.709 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.709 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31912 -I 0 00:11:24.968 [2024-12-16 11:31:30.177447] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31912: invalid cntlid range [1-0] 00:11:24.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31912], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:24.968 request: 00:11:24.968 { 00:11:24.968 "method": "nvmf_create_subsystem", 00:11:24.968 "params": { 00:11:24.968 "nqn": "nqn.2016-06.io.spdk:cnode31912", 00:11:24.968 "max_cntlid": 0 00:11:24.968 } 00:11:24.968 } 00:11:24.968 Got JSON-RPC error response 00:11:24.968 GoRPCClient: error on JSON-RPC call' 00:11:24.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31912], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:24.968 request: 00:11:24.968 { 00:11:24.968 "method": "nvmf_create_subsystem", 00:11:24.968 "params": { 00:11:24.968 "nqn": "nqn.2016-06.io.spdk:cnode31912", 00:11:24.968 "max_cntlid": 0 00:11:24.968 } 00:11:24.968 } 00:11:24.968 Got JSON-RPC error response 00:11:24.968 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3939 -I 65520 00:11:25.226 [2024-12-16 11:31:30.489708] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3939: invalid cntlid range [1-65520] 00:11:25.226 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3939], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:25.226 request: 00:11:25.226 { 00:11:25.226 "method": "nvmf_create_subsystem", 00:11:25.226 "params": { 00:11:25.226 "nqn": "nqn.2016-06.io.spdk:cnode3939", 00:11:25.226 "max_cntlid": 65520 00:11:25.226 } 00:11:25.226 } 00:11:25.226 Got JSON-RPC error response 00:11:25.226 GoRPCClient: error on JSON-RPC call' 00:11:25.226 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3939], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:25.226 request: 00:11:25.226 { 00:11:25.226 "method": "nvmf_create_subsystem", 00:11:25.226 "params": { 00:11:25.226 "nqn": "nqn.2016-06.io.spdk:cnode3939", 00:11:25.226 "max_cntlid": 65520 00:11:25.226 } 00:11:25.226 } 00:11:25.226 Got 
JSON-RPC error response 00:11:25.226 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:25.226 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31252 -i 6 -I 5 00:11:25.485 [2024-12-16 11:31:30.801975] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31252: invalid cntlid range [6-5] 00:11:25.485 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:25.485 request: 00:11:25.485 { 00:11:25.485 "method": "nvmf_create_subsystem", 00:11:25.485 "params": { 00:11:25.485 "nqn": "nqn.2016-06.io.spdk:cnode31252", 00:11:25.485 "min_cntlid": 6, 00:11:25.485 "max_cntlid": 5 00:11:25.485 } 00:11:25.485 } 00:11:25.485 Got JSON-RPC error response 00:11:25.485 GoRPCClient: error on JSON-RPC call' 00:11:25.485 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/16 11:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:25.485 request: 00:11:25.485 { 00:11:25.485 "method": "nvmf_create_subsystem", 00:11:25.485 "params": { 00:11:25.485 "nqn": "nqn.2016-06.io.spdk:cnode31252", 00:11:25.485 "min_cntlid": 6, 00:11:25.485 "max_cntlid": 5 00:11:25.485 } 00:11:25.485 } 00:11:25.485 Got JSON-RPC error response 00:11:25.485 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:25.485 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:25.745 { 00:11:25.745 "name": "foobar", 00:11:25.745 "method": "nvmf_delete_target", 00:11:25.745 "req_id": 1 00:11:25.745 } 00:11:25.745 Got JSON-RPC error response 00:11:25.745 response: 00:11:25.745 { 00:11:25.745 "code": -32602, 00:11:25.745 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:25.745 }' 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:25.745 { 00:11:25.745 "name": "foobar", 00:11:25.745 "method": "nvmf_delete_target", 00:11:25.745 "req_id": 1 00:11:25.745 } 00:11:25.745 Got JSON-RPC error response 00:11:25.745 response: 00:11:25.745 { 00:11:25.745 "code": -32602, 00:11:25.745 "message": "The specified target doesn't exist, cannot delete it." 
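Taken together, the five rejections above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]), all driven through rpc.py's -i (min_cntlid) and -I (max_cntlid) flags, trace out the rule the target enforces: both ends must lie in [1, 65519] (0xFFEF) and min must not exceed max. A hedged restatement in shell; the helper name is illustrative, not SPDK code:

valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 6 5 || echo 'invalid cntlid range [6-5]'   # rejects, as above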
00:11:25.745 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.745 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.745 rmmod nvme_tcp 00:11:25.745 rmmod nvme_fabrics 00:11:25.745 rmmod nvme_keyring 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 76103 ']' 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 76103 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 76103 ']' 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 76103 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76103 00:11:25.745 killing process with pid 76103 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76103' 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 76103 00:11:25.745 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 76103 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:26.004 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:26.005 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:26.005 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:26.005 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:26.005 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.263 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.263 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:26.263 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:11:26.264 00:11:26.264 real 0m6.766s 00:11:26.264 user 0m26.387s 00:11:26.264 sys 0m1.458s 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.264 ************************************ 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:26.264 END TEST nvmf_invalid 00:11:26.264 ************************************ 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.264 ************************************ 00:11:26.264 START TEST nvmf_connect_stress 00:11:26.264 
************************************ 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:26.264 * Looking for test storage... 00:11:26.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.264 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.524 --rc genhtml_branch_coverage=1 00:11:26.524 --rc genhtml_function_coverage=1 00:11:26.524 --rc genhtml_legend=1 00:11:26.524 --rc geninfo_all_blocks=1 00:11:26.524 --rc geninfo_unexecuted_blocks=1 00:11:26.524 00:11:26.524 ' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.524 --rc genhtml_branch_coverage=1 00:11:26.524 --rc genhtml_function_coverage=1 00:11:26.524 --rc genhtml_legend=1 00:11:26.524 --rc geninfo_all_blocks=1 00:11:26.524 --rc geninfo_unexecuted_blocks=1 00:11:26.524 00:11:26.524 ' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.524 --rc genhtml_branch_coverage=1 00:11:26.524 --rc genhtml_function_coverage=1 00:11:26.524 --rc genhtml_legend=1 00:11:26.524 --rc geninfo_all_blocks=1 00:11:26.524 --rc geninfo_unexecuted_blocks=1 00:11:26.524 00:11:26.524 ' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.524 --rc genhtml_branch_coverage=1 00:11:26.524 --rc genhtml_function_coverage=1 00:11:26.524 --rc genhtml_legend=1 00:11:26.524 --rc geninfo_all_blocks=1 00:11:26.524 --rc geninfo_unexecuted_blocks=1 00:11:26.524 00:11:26.524 ' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
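The walk through scripts/common.sh above is a field-wise version comparison: awk '{print $NF}' pulls the version number out of lcov --version, each version string is split on the characters . - :, and the fields are compared numerically with the shorter array padded with zeros. A condensed sketch of that logic, assuming a function name of our own (the real entry points are lt/cmp_versions):

lt_version() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
lt_version 1.15 2 && echo 'lcov 1.15 predates 2.x: use the old --rc lcov_* options'

That final branch is exactly what happens above: 1.15 < 2, so the test exports the lcov 1.x-style branch/function coverage options.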
00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
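The enormous repeated PATH above is the visible effect of paths/export.sh prepending the same go/protoc/golangci directories every time it is re-sourced. A hedged sketch of an idempotent variant; the helper is illustrative and not part of the script:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present: leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH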
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:26.524 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.525 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:26.525 11:31:31 
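The "[: : integer expression expected" message just below comes from test/nvmf/common.sh line 33, where an unset variable expands to the empty string before a numeric test ('[' '' -eq 1 ']' in the trace). The build tolerates it, but the usual guard is to default the value before comparing; the variable name here is illustrative, not read from the trace:

if [ "${SPDK_TEST_FEATURE:-0}" -eq 1 ]; then   # empty/unset falls back to 0
    echo 'feature enabled'
fi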
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:26.525 11:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:26.525 Cannot find device "nvmf_init_br" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:26.525 Cannot find device "nvmf_init_br2" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:26.525 Cannot find device "nvmf_tgt_br" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.525 Cannot find device "nvmf_tgt_br2" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:26.525 Cannot find device "nvmf_init_br" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:26.525 Cannot find device "nvmf_init_br2" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:26.525 Cannot find device "nvmf_tgt_br" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:26.525 Cannot find device "nvmf_tgt_br2" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:26.525 Cannot find device "nvmf_br" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:26.525 Cannot find device "nvmf_init_if" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:26.525 Cannot find device "nvmf_init_if2" 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.525 11:31:31 
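Each "Cannot find device" / "Cannot open network namespace" above is an expected miss: nvmf_veth_init begins by tearing down any topology left over from a previous run, and the "# true" entries following each failing command in the trace suggest every probe is allowed to fail. A hedged condensation of that tolerant-teardown pattern:

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true   # fine if the link is absent
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true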
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.525 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:26.785 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:26.785 11:31:32 
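Condensed from the commands above: the harness creates the nvmf_tgt_ns_spdk namespace, wires veth pairs whose initiator ends stay on the host while the target ends move into the namespace, addresses them out of 10.0.0.0/24, and creates the nvmf_br bridge; the peer ends are enslaved to the bridge just below. This sketch shows one pair per side (the real setup builds two of each):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (host side)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up

The cross-namespace pings that follow (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) are the smoke test that this wiring worked.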
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:26.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:26.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:11:26.785 00:11:26.785 --- 10.0.0.3 ping statistics --- 00:11:26.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.785 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:26.785 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:26.785 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:11:26.785 00:11:26.785 --- 10.0.0.4 ping statistics --- 00:11:26.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.785 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:26.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:26.785 00:11:26.785 --- 10.0.0.1 ping statistics --- 00:11:26.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.785 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:26.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:11:26.785 00:11:26.785 --- 10.0.0.2 ping statistics --- 00:11:26.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.785 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=76663 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 76663 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 76663 ']' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.785 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.044 [2024-12-16 11:31:32.221816] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
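nvmfappstart above launches nvmf_tgt inside the target namespace (the NVMF_TARGET_NS_CMD prefix becomes the ip netns exec wrapper visible in the trace), records its pid, and waitforlisten blocks until /var/tmp/spdk.sock answers. A hedged sketch of that pattern; polling rpc_get_methods stands in for the real waitforlisten helper:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1   # keep polling until the RPC socket is up
done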
00:11:27.044 [2024-12-16 11:31:32.221924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.044 [2024-12-16 11:31:32.366236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.044 [2024-12-16 11:31:32.425685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.044 [2024-12-16 11:31:32.425748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.044 [2024-12-16 11:31:32.425758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.044 [2024-12-16 11:31:32.425767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.044 [2024-12-16 11:31:32.425773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.044 [2024-12-16 11:31:32.428098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.044 [2024-12-16 11:31:32.428150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.044 [2024-12-16 11:31:32.428158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 [2024-12-16 11:31:33.282986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:27.980 11:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 [2024-12-16 11:31:33.304689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 NULL1 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=76715 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.980 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:27.981 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.239 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.497 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:28.497 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:28.497 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.497 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.497 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.756 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.756 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:28.756 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.756 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.756 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.015 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:29.015 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.015 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.015 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.583 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.583 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:29.583 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.583 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.583 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.841 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.841 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:29.841 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.841 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.841 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.099 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.099 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:30.099 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.099 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.099 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.357 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.357 
11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:30.357 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.357 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.357 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:30.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.616 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.183 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.183 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:31.183 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.183 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.183 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.442 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.442 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:31.442 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.442 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.442 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.700 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.700 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:31.700 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.700 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.700 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.959 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.959 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:31.959 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.959 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.959 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.268 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.268 11:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:32.268 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.268 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.268 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.525 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.525 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:32.525 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.525 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.525 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.092 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.092 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:33.092 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.092 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.092 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.351 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.351 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:33.351 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.351 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.351 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.609 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:33.609 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.609 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.609 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.868 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.868 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:33.868 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.868 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.868 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.434 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.434 11:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:34.434 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.434 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.434 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.693 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.693 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:34.693 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.693 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.693 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.952 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.952 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:34.952 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.952 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.952 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:35.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:35.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.035 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.035 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:36.035 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.035 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.035 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.293 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.293 11:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:36.293 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.293 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.293 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.551 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.552 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:36.552 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.552 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.552 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.810 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.810 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:36.810 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.810 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.810 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.068 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.068 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:37.068 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.068 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.068 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.635 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.635 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:37.635 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.635 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.635 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.894 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.894 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:37.894 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.894 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.894 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.152 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.152 11:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:38.152 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.152 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.152 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.152 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76715 00:11:38.411 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (76715) - No such process 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 76715 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.411 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.411 rmmod nvme_tcp 00:11:38.411 rmmod nvme_fabrics 00:11:38.411 rmmod nvme_keyring 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 76663 ']' 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 76663 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 76663 ']' 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 76663 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76663 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:38.670 11:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:38.670 killing process with pid 76663 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76663' 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 76663 00:11:38.670 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 76663 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.929 
11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.929 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.188 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:11:39.188 00:11:39.188 real 0m12.812s 00:11:39.188 user 0m41.559s 00:11:39.188 sys 0m3.494s 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.189 ************************************ 00:11:39.189 END TEST nvmf_connect_stress 00:11:39.189 ************************************ 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.189 ************************************ 00:11:39.189 START TEST nvmf_fused_ordering 00:11:39.189 ************************************ 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.189 * Looking for test storage... 00:11:39.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.189 11:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.189 --rc genhtml_branch_coverage=1 00:11:39.189 --rc genhtml_function_coverage=1 00:11:39.189 --rc genhtml_legend=1 00:11:39.189 --rc geninfo_all_blocks=1 00:11:39.189 --rc geninfo_unexecuted_blocks=1 00:11:39.189 00:11:39.189 ' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.189 --rc genhtml_branch_coverage=1 00:11:39.189 --rc genhtml_function_coverage=1 00:11:39.189 --rc genhtml_legend=1 00:11:39.189 --rc geninfo_all_blocks=1 00:11:39.189 --rc geninfo_unexecuted_blocks=1 00:11:39.189 00:11:39.189 ' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.189 --rc genhtml_branch_coverage=1 00:11:39.189 --rc genhtml_function_coverage=1 00:11:39.189 --rc genhtml_legend=1 00:11:39.189 --rc geninfo_all_blocks=1 00:11:39.189 --rc geninfo_unexecuted_blocks=1 00:11:39.189 00:11:39.189 ' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.189 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:39.189 --rc genhtml_branch_coverage=1 00:11:39.189 --rc genhtml_function_coverage=1 00:11:39.189 --rc genhtml_legend=1 00:11:39.189 --rc geninfo_all_blocks=1 00:11:39.189 --rc geninfo_unexecuted_blocks=1 00:11:39.189 00:11:39.189 ' 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.189 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:39.449 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:39.449 11:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:39.449 Cannot find device "nvmf_init_br" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:39.449 Cannot find device "nvmf_init_br2" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:39.449 Cannot find device "nvmf_tgt_br" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.449 Cannot find device "nvmf_tgt_br2" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:39.449 Cannot find device "nvmf_init_br" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:39.449 Cannot find device "nvmf_init_br2" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:39.449 Cannot find device "nvmf_tgt_br" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:39.449 Cannot find device "nvmf_tgt_br2" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:39.449 Cannot find device "nvmf_br" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:39.449 Cannot find device "nvmf_init_if" 00:11:39.449 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:11:39.449 
11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:39.450 Cannot find device "nvmf_init_if2" 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:39.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:39.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:39.450 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:39.709 11:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:39.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:39.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:11:39.709 00:11:39.709 --- 10.0.0.3 ping statistics --- 00:11:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.709 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:39.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:39.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:11:39.709 00:11:39.709 --- 10.0.0.4 ping statistics --- 00:11:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.709 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:39.709 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:39.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:39.709 00:11:39.709 --- 10.0.0.1 ping statistics --- 00:11:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.709 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:39.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:39.709 00:11:39.709 --- 10.0.0.2 ping statistics --- 00:11:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.709 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=77102 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 77102 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 77102 ']' 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
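Aside: the veth topology that nvmf/common.sh built in the prologue above is easier to see in isolation. A minimal sketch that reproduces the initiator/target split by hand, condensed to a single veth pair per side (assumes root, iproute2, and no clashing interface names):

  # target gets its own network namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # enslave the *_br peer ends to one bridge so both sides share an L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port, let bridged frames through, then sanity-check
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3
  # the target app then runs inside the namespace, as the trace shows:
  # ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0x2

Every command mirrors one visible in the trace; only the second (if2/br2) pair and the SPDK_NVMF comment tags on the iptables rules are dropped for brevity.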
00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.709 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:39.709 [2024-12-16 11:31:45.113333] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:11:39.709 [2024-12-16 11:31:45.113448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.968 [2024-12-16 11:31:45.269033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.968 [2024-12-16 11:31:45.331841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.968 [2024-12-16 11:31:45.331927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.968 [2024-12-16 11:31:45.331951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.968 [2024-12-16 11:31:45.331962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.968 [2024-12-16 11:31:45.331971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.968 [2024-12-16 11:31:45.332510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 [2024-12-16 11:31:45.529350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 [2024-12-16 11:31:45.545510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 NULL1 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.227 [2024-12-16 11:31:45.600323] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
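Stripped of the rpc_cmd wrappers, the target configuration just traced is five rpc.py calls plus the test invocation. Run by hand against the same /var/tmp/spdk.sock RPC socket, they would look roughly like this (flags taken verbatim from the trace: -a allow any host, -s serial number, -m max namespaces, -u in-capsule data size):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # drive it with the fused-ordering initiator, addressing the listener via
  # SPDK's transport-ID string exactly as above:
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'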
00:11:40.227 [2024-12-16 11:31:45.600374] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77133 ] 00:11:40.794 Attached to nqn.2016-06.io.spdk:cnode1 00:11:40.794 Namespace ID: 1 size: 1GB 00:11:40.794 fused_ordering(0) 00:11:40.794 fused_ordering(1) 00:11:40.794 fused_ordering(2) [... fused_ordering(3) through fused_ordering(1021) elided: the app emitted all 1024 indices contiguously and in increasing order, timestamps advancing from 00:11:40.794 to 00:11:42.506 ...] fused_ordering(1022) 00:11:42.506 fused_ordering(1023) 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.506 rmmod nvme_tcp 00:11:42.506 rmmod nvme_fabrics 00:11:42.506 rmmod nvme_keyring 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:42.506 11:31:47
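With all 1024 fused commands acknowledged, the trap fires nvmftestfini and the teardown below unwinds everything the prologue built. One detail worth isolating is how the firewall rules are cleaned up: the ipts helper tagged every inserted rule with a comment, so the iptr helper can sweep them all without tracking individual rules. The pattern, as it appears at nvmf/common.sh@790/@791 in this trace:

  # insert: tag the rule with a comment encoding its own spec
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # sweep: dump the ruleset, drop every tagged rule, reload the remainder
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Both halves appear verbatim in the log; the comment match relies on the standard xt_comment extension.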
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 77102 ']' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 77102 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 77102 ']' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 77102 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77102 00:11:42.506 killing process with pid 77102 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77102' 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 77102 00:11:42.506 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 77102 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:42.765 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:11:43.023 00:11:43.023 real 0m3.864s 00:11:43.023 user 0m4.348s 00:11:43.023 sys 0m1.432s 00:11:43.023 ************************************ 00:11:43.023 END TEST nvmf_fused_ordering 00:11:43.023 ************************************ 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.023 ************************************ 00:11:43.023 START TEST nvmf_ns_masking 00:11:43.023 ************************************ 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:43.023 * Looking for test storage... 
00:11:43.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.023 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.283 --rc genhtml_branch_coverage=1 00:11:43.283 --rc genhtml_function_coverage=1 00:11:43.283 --rc genhtml_legend=1 00:11:43.283 --rc geninfo_all_blocks=1 00:11:43.283 --rc geninfo_unexecuted_blocks=1 00:11:43.283 00:11:43.283 ' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.283 --rc genhtml_branch_coverage=1 00:11:43.283 --rc genhtml_function_coverage=1 00:11:43.283 --rc genhtml_legend=1 00:11:43.283 --rc geninfo_all_blocks=1 00:11:43.283 --rc geninfo_unexecuted_blocks=1 00:11:43.283 00:11:43.283 ' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.283 --rc genhtml_branch_coverage=1 00:11:43.283 --rc genhtml_function_coverage=1 00:11:43.283 --rc genhtml_legend=1 00:11:43.283 --rc geninfo_all_blocks=1 00:11:43.283 --rc geninfo_unexecuted_blocks=1 00:11:43.283 00:11:43.283 ' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.283 --rc genhtml_branch_coverage=1 00:11:43.283 --rc genhtml_function_coverage=1 00:11:43.283 --rc genhtml_legend=1 00:11:43.283 --rc geninfo_all_blocks=1 00:11:43.283 --rc geninfo_unexecuted_blocks=1 00:11:43.283 00:11:43.283 ' 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
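The scripts/common.sh walk above is cmp_versions deciding 'lt 1.15 2': is the installed lcov older than 2.x, and does it therefore need the explicit branch/function coverage flags just exported? The core is an ordinary field-wise numeric compare after splitting on '.', '-' and ':'. A standalone sketch of the same idea (not the script's exact code):

  ver_lt() {                       # ver_lt A B: succeed if version A < version B
      local IFS=.-: i a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields compare as 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                     # equal is not less-than
  }
  ver_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Here 1.15 parses as fields (1,15) and loses to (2) on the first field, so the pre-2.x flags are kept, matching the trace.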
# uname -s 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:11:43.283 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same golangci/protoc/go toolchain entries repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated toolchain entries, elided ...]:/var/lib/snapd/snap/bin 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain entries, elided ...]:/var/lib/snapd/snap/bin 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated toolchain entries, elided ...]:/var/lib/snapd/snap/bin 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.284 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 --
hostsock=/var/tmp/host.sock 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2d660426-c540-449f-a820-fbf449019945 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3fafbf0f-8dcc-4531-bf16-972da7584a4f 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=22fc2130-9b21-4cdb-bf68-4b3646ba7761 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.284 11:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.284 Cannot find device "nvmf_init_br" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.284 Cannot find device "nvmf_init_br2" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.284 Cannot find device "nvmf_tgt_br" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.284 Cannot find device "nvmf_tgt_br2" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.284 Cannot find device "nvmf_init_br" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.284 Cannot find device "nvmf_init_br2" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.284 Cannot find device "nvmf_tgt_br" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.284 Cannot find device 
"nvmf_tgt_br2" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.284 Cannot find device "nvmf_br" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.284 Cannot find device "nvmf_init_if" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.284 Cannot find device "nvmf_init_if2" 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:11:43.284 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.543 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.544 
11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:43.544 00:11:43.544 --- 10.0.0.3 ping statistics --- 00:11:43.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.544 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.544 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:11:43.544 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:11:43.544 00:11:43.544 --- 10.0.0.4 ping statistics --- 00:11:43.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.544 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:43.544 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:43.803 00:11:43.803 --- 10.0.0.1 ping statistics --- 00:11:43.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.803 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:43.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:11:43.803 00:11:43.803 --- 10.0.0.2 ping statistics --- 00:11:43.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.803 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
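The block of ip/iptables commands above is nvmf_veth_init building the virtual test network: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to one bridge so the initiator addresses (10.0.0.1, 10.0.0.2) and target addresses (10.0.0.3, 10.0.0.4) can reach each other. A condensed sketch of that sequence, using only the device names, addresses, and flags visible in the trace (the SPDK_NVMF comment tags on the iptables rules and the unconditional teardown pass are omitted):

    # Namespace that will host the nvmf target process.
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators get 10.0.0.1/.2, targets 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, inside and outside the namespace.
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open TCP/4420 on the initiator interfaces, allow bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: each side pings the other across the bridge.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The earlier "Cannot find device" and "Cannot open network namespace" messages are the expected output of the cleanup pass that precedes this setup: the nomaster/delete calls run unconditionally and are allowed to fail (note the "true" evaluations after each one) when no stale topology is left over from a previous run.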
00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=77380 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 77380 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 77380 ']' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.803 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 [2024-12-16 11:31:49.061040] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:11:43.803 [2024-12-16 11:31:49.061400] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.803 [2024-12-16 11:31:49.211477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.062 [2024-12-16 11:31:49.268806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.062 [2024-12-16 11:31:49.269064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.062 [2024-12-16 11:31:49.269240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.062 [2024-12-16 11:31:49.269411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.062 [2024-12-16 11:31:49.269533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:44.062 [2024-12-16 11:31:49.269960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.062 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.320 [2024-12-16 11:31:49.723130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.579 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:44.579 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:44.579 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.837 Malloc1 00:11:44.837 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:45.096 Malloc2 00:11:45.096 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.663 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:45.921 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:46.180 [2024-12-16 11:31:51.397825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22fc2130-9b21-4cdb-bf68-4b3646ba7761 -a 10.0.0.3 -s 4420 -i 4 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:46.180 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.713 11:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.713 [ 0]:0x1 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=65417c658f19456db3b70efd3a7d4132 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 65417c658f19456db3b70efd3a7d4132 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.713 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.713 [ 0]:0x1 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=65417c658f19456db3b70efd3a7d4132 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 65417c658f19456db3b70efd3a7d4132 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:48.713 [ 1]:0x2 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.713 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.972 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:48.972 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.972 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:48.972 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.972 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.230 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:49.489 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:49.489 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22fc2130-9b21-4cdb-bf68-4b3646ba7761 -a 10.0.0.3 -s 4420 -i 4 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:49.751 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:51.658 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.658 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.927 [ 0]:0x2 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.927 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:52.185 [ 0]:0x1 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=65417c658f19456db3b70efd3a7d4132 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 65417c658f19456db3b70efd3a7d4132 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.185 [ 1]:0x2 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.185 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.752 11:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:52.752 [ 0]:0x2 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:52.752 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.752 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.011 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:53.011 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 22fc2130-9b21-4cdb-bf68-4b3646ba7761 -a 10.0.0.3 -s 4420 -i 4 00:11:53.269 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.269 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.270 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.270 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:53.270 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:53.270 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:55.170 [ 0]:0x1 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=65417c658f19456db3b70efd3a7d4132 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 65417c658f19456db3b70efd3a7d4132 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.170 [ 1]:0x2 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.170 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.429 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:55.429 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.429 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:55.688 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.688 [ 0]:0x2 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:55.688 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.255 [2024-12-16 11:32:01.372147] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:56.255 2024/12/16 11:32:01 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:11:56.255 request: 00:11:56.255 { 00:11:56.255 "method": "nvmf_ns_remove_host", 00:11:56.255 "params": { 00:11:56.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.255 "nsid": 2, 00:11:56.255 "host": "nqn.2016-06.io.spdk:host1" 00:11:56.255 } 00:11:56.255 } 00:11:56.255 Got JSON-RPC error response 00:11:56.256 GoRPCClient: error on JSON-RPC call 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.256 11:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.256 [ 0]:0x2 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d90f6ff84e04952a035f762a0855371 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d90f6ff84e04952a035f762a0855371 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=77757 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 77757 /var/tmp/host.sock 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 77757 ']' 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/host.sock 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.256 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:56.256 [2024-12-16 11:32:01.607236] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:11:56.256 [2024-12-16 11:32:01.607325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77757 ] 00:11:56.514 [2024-12-16 11:32:01.756116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.514 [2024-12-16 11:32:01.824092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.449 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.449 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:57.449 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.707 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.966 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2d660426-c540-449f-a820-fbf449019945 00:11:57.966 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:57.966 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D660426C540449FA820FBF449019945 -i 00:11:58.225 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3fafbf0f-8dcc-4531-bf16-972da7584a4f 00:11:58.225 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:58.225 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3FAFBF0F8DCC4531BF16972DA7584A4F -i 00:11:58.792 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.051 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:59.310 11:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.310 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.568 nvme0n1 00:11:59.569 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.569 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:00.146 nvme1n2 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:00.146 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:00.714 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2d660426-c540-449f-a820-fbf449019945 == \2\d\6\6\0\4\2\6\-\c\5\4\0\-\4\4\9\f\-\a\8\2\0\-\f\b\f\4\4\9\0\1\9\9\4\5 ]] 00:12:00.714 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:00.714 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:00.714 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:00.714 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3fafbf0f-8dcc-4531-bf16-972da7584a4f == \3\f\a\f\b\f\0\f\-\8\d\c\c\-\4\5\3\1\-\b\f\1\6\-\9\7\2\d\a\7\5\8\4\a\4\f ]] 00:12:00.973 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.233 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
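
[Annotation] The masking assertions traced in this test lean on two small helpers, uuid2nguid (nvmf/common.sh@787) and ns_is_visible (target/ns_masking.sh@43-45). Reconstructed from the trace alone, as hedged sketches rather than verbatim SPDK source (the trace shows only the "tr -d -" step, so the upper-casing below is an assumption; /dev/nvme0 is the device the trace itself uses):

uuid2nguid() {
    # An NGUID is the bdev UUID with dashes stripped; the -g arguments in the
    # trace are upper-case, so the case conversion here is assumed.
    tr -d - <<< "${1^^}"
}

ns_is_visible() {
    # Per the traced steps: the namespace must show up in list-ns and report a
    # non-zero NGUID to count as visible.
    nvme list-ns /dev/nvme0 | grep "$1"
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
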
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2d660426-c540-449f-a820-fbf449019945
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D660426C540449FA820FBF449019945
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D660426C540449FA820FBF449019945
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:12:01.492 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D660426C540449FA820FBF449019945
00:12:01.750 [2024-12-16 11:32:07.098457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:12:01.750 [2024-12-16 11:32:07.098520] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:12:01.750 [2024-12-16 11:32:07.098534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:01.750 2024/12/16 11:32:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:2D660426C540449FA820FBF449019945 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:01.750 request:
00:12:01.750 {
00:12:01.750 "method": "nvmf_subsystem_add_ns",
00:12:01.750 "params": {
00:12:01.750 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:01.750 "namespace": {
00:12:01.750 "bdev_name": "invalid",
00:12:01.750 "nsid": 1,
00:12:01.750 "nguid": "2D660426C540449FA820FBF449019945",
00:12:01.750 "no_auto_visible": false,
00:12:01.750 "hide_metadata": false
00:12:01.750 }
00:12:01.750 }
00:12:01.750 }
00:12:01.750 Got JSON-RPC error response
00:12:01.750
GoRPCClient: error on JSON-RPC call 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2d660426-c540-449f-a820-fbf449019945 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.750 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D660426C540449FA820FBF449019945 -i 00:12:02.316 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:04.241 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:04.241 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:04.241 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 77757 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 77757 ']' 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 77757 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77757 00:12:04.501 killing process with pid 77757 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77757' 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 77757 00:12:04.501 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 77757 00:12:05.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.328 11:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.328 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.328 rmmod nvme_tcp 00:12:05.328 rmmod nvme_fabrics 00:12:05.586 rmmod nvme_keyring 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 77380 ']' 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 77380 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 77380 ']' 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 77380 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77380 00:12:05.586 killing process with pid 77380 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77380' 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 77380 00:12:05.586 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 77380 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 
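
[Annotation] The records surrounding this point are nvmftestfini tearing the fixture down: unload the kernel NVMe modules, drop only the SPDK-tagged iptables rules, then dismantle the veth/bridge topology. A condensed sketch of that sequence, using the interface and namespace names nvmf/common.sh defined earlier; the trace does not show the body of remove_spdk_ns, so the final step is an assumption:

# Strip only the rules tagged with the SPDK_NVMF comment, as the
# iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline above does.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridge ports, bring everything down, delete the veth pairs.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed: what remove_spdk_ns boils down to
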
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:05.845 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0
00:12:06.105
00:12:06.105 real 0m22.983s
00:12:06.105 user 0m39.646s
00:12:06.105 sys 0m3.516s
00:12:06.105 ************************************
00:12:06.105 END TEST nvmf_ns_masking
00:12:06.105 ************************************
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]]
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]]
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:06.105 ************************************
00:12:06.105 START TEST nvmf_auth_target
00:12:06.105 ************************************
00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- #
/home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:06.105 * Looking for test storage... 00:12:06.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.105 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.365 --rc genhtml_branch_coverage=1 00:12:06.365 --rc genhtml_function_coverage=1 00:12:06.365 --rc genhtml_legend=1 00:12:06.365 --rc geninfo_all_blocks=1 00:12:06.365 --rc geninfo_unexecuted_blocks=1 00:12:06.365 00:12:06.365 ' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.365 --rc genhtml_branch_coverage=1 00:12:06.365 --rc genhtml_function_coverage=1 00:12:06.365 --rc genhtml_legend=1 00:12:06.365 --rc geninfo_all_blocks=1 00:12:06.365 --rc geninfo_unexecuted_blocks=1 00:12:06.365 00:12:06.365 ' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.365 --rc genhtml_branch_coverage=1 00:12:06.365 --rc genhtml_function_coverage=1 00:12:06.365 --rc genhtml_legend=1 00:12:06.365 --rc geninfo_all_blocks=1 00:12:06.365 --rc geninfo_unexecuted_blocks=1 00:12:06.365 00:12:06.365 ' 00:12:06.365 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.365 --rc genhtml_branch_coverage=1 00:12:06.365 --rc genhtml_function_coverage=1 00:12:06.365 --rc genhtml_legend=1 00:12:06.366 --rc geninfo_all_blocks=1 00:12:06.366 --rc geninfo_unexecuted_blocks=1 00:12:06.366 00:12:06.366 ' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.366 
11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:06.366 Cannot find device "nvmf_init_br" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:06.366 Cannot find device "nvmf_init_br2" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:06.366 Cannot find device "nvmf_tgt_br" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.366 Cannot find device "nvmf_tgt_br2" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:06.366 Cannot find device "nvmf_init_br" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:06.366 Cannot find device "nvmf_init_br2" 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:06.366 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:06.366 Cannot find device "nvmf_tgt_br" 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:06.367 Cannot find device "nvmf_tgt_br2" 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:06.367 Cannot find device "nvmf_br" 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:06.367 Cannot find device "nvmf_init_if" 00:12:06.367 11:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:06.367 Cannot find device "nvmf_init_if2" 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:06.367 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:06.627 11:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:12:06.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:06.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:12:06.627
00:12:06.627 --- 10.0.0.3 ping statistics ---
00:12:06.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:06.627 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:12:06.627 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:12:06.628 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:12:06.628 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms
00:12:06.628
00:12:06.628 --- 10.0.0.4 ping statistics ---
00:12:06.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:06.628 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:06.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:06.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:12:06.628
00:12:06.628 --- 10.0.0.1 ping statistics ---
00:12:06.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:06.628 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:12:06.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:06.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms
00:12:06.628
00:12:06.628 --- 10.0.0.2 ping statistics ---
00:12:06.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:06.628 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:06.628 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=78262
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 78262
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78262 ']'
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
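
[Annotation] waitforlisten 78262 blocks here until the freshly forked nvmf_tgt answers RPCs on /var/tmp/spdk.sock. The real helper in autotest_common.sh is more involved; a minimal loop in the same spirit (rpc_get_methods is just one cheap RPC to probe with, and the retry cadence is an assumption):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        # Done once the socket exists and the app answers a trivial RPC.
        if [[ -S $rpc_addr ]] &&
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
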
00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.628 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=78297 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4954f57c0af24ecd06e875f3f87c95acf3c53af53c890a4d 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lS6 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4954f57c0af24ecd06e875f3f87c95acf3c53af53c890a4d 0 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4954f57c0af24ecd06e875f3f87c95acf3c53af53c890a4d 0 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4954f57c0af24ecd06e875f3f87c95acf3c53af53c890a4d 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.196 11:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lS6
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lS6
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.lS6
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wCF
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950 3
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950 3
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wCF
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wCF
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wCF
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d7fb2baa4bbc917d2a5c8557b4a6abaf
00:12:07.196 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.P3L
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d7fb2baa4bbc917d2a5c8557b4a6abaf 1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d7fb2baa4bbc917d2a5c8557b4a6abaf 1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d7fb2baa4bbc917d2a5c8557b4a6abaf
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.P3L
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.P3L
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.P3L
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a519fb4ac5e80a710a473b7a3c66efbd24b5765462985b0b
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dW9
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a519fb4ac5e80a710a473b7a3c66efbd24b5765462985b0b 2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a519fb4ac5e80a710a473b7a3c66efbd24b5765462985b0b 2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a519fb4ac5e80a710a473b7a3c66efbd24b5765462985b0b
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dW9
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dW9
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.dW9
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e0ad3da894f1454f34b3227852352892bf39395d7ab73985
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FkJ
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e0ad3da894f1454f34b3227852352892bf39395d7ab73985 2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e0ad3da894f1454f34b3227852352892bf39395d7ab73985 2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e0ad3da894f1454f34b3227852352892bf39395d7ab73985
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FkJ
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FkJ
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.FkJ
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc7283801d9f9e289a36db4db37c10ff
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8BS
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc7283801d9f9e289a36db4db37c10ff 1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc7283801d9f9e289a36db4db37c10ff 1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc7283801d9f9e289a36db4db37c10ff
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8BS
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8BS
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.8BS
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:12:07.456 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=68654b4a5f6f2ce96be5de9ce02c20dcf062407aa06cf7c8475265670e899541
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Rhy
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 68654b4a5f6f2ce96be5de9ce02c20dcf062407aa06cf7c8475265670e899541 3
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 68654b4a5f6f2ce96be5de9ce02c20dcf062407aa06cf7c8475265670e899541 3
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=68654b4a5f6f2ce96be5de9ce02c20dcf062407aa06cf7c8475265670e899541
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Rhy
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Rhy
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Rhy
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 78262
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78262 ']'
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:07.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:07.737 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 78297 /var/tmp/host.sock
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78297 ']'
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:07.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
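For reference, the gen_dhchap_key/format_key trace above wraps each random hex string from xxd in the DHHC-1 secret format. A minimal sketch of that transformation, assuming the byte layout used by nvmf/common.sh (the ASCII hex key itself, followed by its little-endian CRC-32, base64-encoded) -- the function body below is a reconstruction, not the verbatim source:

    format_key() {
        local prefix=$1 key=$2 digest=$3
        # Same inline-python pattern as nvmf/common.sh@733 ("python -")
        python3 - "$prefix" "$key" "$digest" <<'EOF'
    import base64, sys, zlib

    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    # Append the CRC-32 of the key bytes (little-endian), then base64 the whole blob
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
    EOF
    }

With the sha512 key generated above, format_key DHHC-1 5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950 3 should print DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:, which is exactly the --dhchap-ctrl-secret that shows up in the nvme connect calls later in this log.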
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:07.996 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lS6
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.lS6
00:12:08.254 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.lS6
00:12:08.513 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wCF ]]
00:12:08.513 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wCF
00:12:08.513 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.513 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.771 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.771 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wCF
00:12:08.771 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wCF
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.P3L
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.P3L
00:12:09.030 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.P3L
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.dW9 ]]
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dW9
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dW9
00:12:09.289 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dW9
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FkJ
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FkJ
00:12:09.548 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FkJ
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.8BS ]]
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8BS
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8BS
00:12:09.806 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8BS
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rhy
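Condensed, the registration pass traced at target/auth.sh@108-113 amounts to the loop below: every key file is registered under the same name on both sides, the target via rpc_cmd (default RPC socket) and the host via hostrpc, which points rpc.py at /var/tmp/host.sock. A sketch, using the keys/ckeys arrays populated earlier:

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"      # target side
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"      # host side
        if [[ -n ${ckeys[i]} ]]; then                          # controller key is optional
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done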
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Rhy
00:12:10.063 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Rhy
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:10.320 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:10.581 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:11.149
00:12:11.149 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:11.149 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:11.149 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:11.408 {
00:12:11.408 "auth": {
00:12:11.408 "dhgroup": "null",
00:12:11.408 "digest": "sha256",
00:12:11.408 "state": "completed"
00:12:11.408 },
00:12:11.408 "cntlid": 1,
00:12:11.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:12:11.408 "listen_address": {
00:12:11.408 "adrfam": "IPv4",
00:12:11.408 "traddr": "10.0.0.3",
00:12:11.408 "trsvcid": "4420",
00:12:11.408 "trtype": "TCP"
00:12:11.408 },
00:12:11.408 "peer_address": {
00:12:11.408 "adrfam": "IPv4",
00:12:11.408 "traddr": "10.0.0.1",
00:12:11.408 "trsvcid": "49586",
00:12:11.408 "trtype": "TCP"
00:12:11.408 },
00:12:11.408 "qid": 0,
00:12:11.408 "state": "enabled",
00:12:11.408 "thread": "nvmf_tgt_poll_group_000"
00:12:11.408 }
00:12:11.408 ]'
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:11.408 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:12.048 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:12:12.049 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:16.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:16.236 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
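The DHHC-1 strings handed to nvme connect above are just the generated hex keys in the wrapper sketched earlier, so a secret can be unpacked for a quick sanity check. A sketch, assuming the key+CRC-32 layout (head -c -4 here relies on GNU coreutils):

    secret='DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:'
    b64=${secret#DHHC-1:*:}                     # strip the DHHC-1:<digest>: framing
    b64=${b64%:}                                # and the trailing colon
    echo "$b64" | base64 -d | head -c -4; echo  # drop the 4 CRC bytes
    # -> 5142e26dce11dd660dccb497d8cb47d2d924140a5e82db204605886b7b798950 (ckey0 above)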
00:12:16.495 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:17.062
00:12:17.062 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:17.062 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:17.062 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:17.320 {
00:12:17.320 "auth": {
00:12:17.320 "dhgroup": "null",
00:12:17.320 "digest": "sha256",
00:12:17.320 "state": "completed"
00:12:17.320 },
00:12:17.320 "cntlid": 3,
00:12:17.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:12:17.320 "listen_address": {
00:12:17.320 "adrfam": "IPv4",
00:12:17.320 "traddr": "10.0.0.3",
00:12:17.320 "trsvcid": "4420",
00:12:17.320 "trtype": "TCP"
00:12:17.320 },
00:12:17.320 "peer_address": {
00:12:17.320 "adrfam": "IPv4",
00:12:17.320 "traddr": "10.0.0.1",
00:12:17.320 "trsvcid": "38768",
00:12:17.320 "trtype": "TCP"
00:12:17.320 },
00:12:17.320 "qid": 0,
00:12:17.320 "state": "enabled",
00:12:17.320 "thread": "nvmf_tgt_poll_group_000"
00:12:17.320 }
00:12:17.320 ]'
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:17.320 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:17.890 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:12:17.890 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:18.459 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:18.718 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:19.285
00:12:19.285 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:19.285 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:19.285 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:19.544 {
00:12:19.544 "auth": {
00:12:19.544 "dhgroup": "null",
00:12:19.544 "digest": "sha256",
00:12:19.544 "state": "completed"
00:12:19.544 },
00:12:19.544 "cntlid": 5,
00:12:19.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:12:19.544 "listen_address": {
00:12:19.544 "adrfam": "IPv4",
00:12:19.544 "traddr": "10.0.0.3",
00:12:19.544 "trsvcid": "4420",
00:12:19.544 "trtype": "TCP"
00:12:19.544 },
00:12:19.544 "peer_address": {
00:12:19.544 "adrfam": "IPv4",
00:12:19.544 "traddr": "10.0.0.1",
00:12:19.544 "trsvcid": "38784",
00:12:19.544 "trtype": "TCP"
00:12:19.544 },
00:12:19.544 "qid": 0,
00:12:19.544 "state": "enabled",
00:12:19.544 "thread": "nvmf_tgt_poll_group_000"
00:12:19.544 }
00:12:19.544 ]'
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:19.544 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
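Each connect_authenticate round closes with the same target-side verification seen in the qpair dumps above (target/auth.sh@73-78). Roughly, as a sketch using the round's digest/dhgroup variables:

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]   # e.g. sha256
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. null
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

Only after the in-kernel path is also exercised (nvme connect with the literal DHHC-1 secrets, then nvme disconnect) is the host entry removed and the next key tried.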
00:12:19.803 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:12:19.803 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:20.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:20.738 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:20.997 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:21.255
00:12:21.255 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:21.255 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:21.255 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
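key3 is the one entry generated without a controller key (ckeys[3]= above), which is why this round's nvmf_subsystem_add_host and attach calls carry no --dhchap-ctrlr-key: the @68 array assignment only expands to the extra flags when the slot is non-empty. The same bash pattern in isolation, as a sketch:

    ckeys[3]=                                                  # empty, as generated above
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"                                         # 0 -> unidirectional auth only

With a non-empty ckeys entry the array would instead hold the two extra words, so "${ckey[@]}" can be spliced into the RPC call unconditionally.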
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.822 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:21.822 {
00:12:21.822 "auth": {
00:12:21.822 "dhgroup": "null",
00:12:21.822 "digest": "sha256",
00:12:21.822 "state": "completed"
00:12:21.822 },
00:12:21.822 "cntlid": 7,
00:12:21.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:12:21.822 "listen_address": {
00:12:21.822 "adrfam": "IPv4",
00:12:21.822 "traddr": "10.0.0.3",
00:12:21.822 "trsvcid": "4420",
00:12:21.822 "trtype": "TCP"
00:12:21.822 },
00:12:21.822 "peer_address": {
00:12:21.822 "adrfam": "IPv4",
00:12:21.822 "traddr": "10.0.0.1",
00:12:21.822 "trsvcid": "38818",
00:12:21.822 "trtype": "TCP"
00:12:21.822 },
00:12:21.822 "qid": 0,
00:12:21.822 "state": "enabled",
00:12:21.822 "thread": "nvmf_tgt_poll_group_000"
00:12:21.822 }
00:12:21.822 ]'
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:21.822 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:22.081 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:12:22.081 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:23.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:12:23.016 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.275 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.534
00:12:23.534 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:23.534 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:23.534 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:23.792 {
00:12:23.792 "auth": {
00:12:23.792 "dhgroup": "ffdhe2048",
00:12:23.792 "digest": "sha256",
00:12:23.792 "state": "completed"
00:12:23.792 },
00:12:23.792 "cntlid": 9,
00:12:23.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:12:23.792 "listen_address": {
00:12:23.792 "adrfam": "IPv4",
00:12:23.792 "traddr": "10.0.0.3",
00:12:23.792 "trsvcid": "4420",
00:12:23.792 "trtype": "TCP"
00:12:23.792 },
00:12:23.792 "peer_address": {
00:12:23.792 "adrfam": "IPv4",
00:12:23.792 "traddr": "10.0.0.1",
00:12:23.792 "trsvcid": "48382",
00:12:23.792 "trtype": "TCP"
00:12:23.792 },
00:12:23.792 "qid": 0,
00:12:23.792 "state": "enabled",
00:12:23.792 "thread": "nvmf_tgt_poll_group_000"
00:12:23.792 }
00:12:23.792 ]'
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:23.792 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:24.051 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
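From here the run has left dhgroup null behind: the target/auth.sh@119-@121 markers above are the middle of the sweep that drives this whole section. Reconstructed from the line markers (not quoted from target/auth.sh), the driver looks roughly like:

    for digest in "${digests[@]}"; do            # sha256, sha384, sha512 (@118)
        for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ... (@119)
            for keyid in "${!keys[@]}"; do       # 0..3 (@120)
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @121
                connect_authenticate "$digest" "$dhgroup" "$keyid"                                      # @123
            done
        done
    done

so every digest/DH-group combination is re-negotiated against all four keys, which is why the same attach/verify/disconnect pattern repeats below with only the parameters changing.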
11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:24.310 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.245 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.812 00:12:25.812 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.812 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.812 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.070 { 00:12:26.070 "auth": { 00:12:26.070 "dhgroup": "ffdhe2048", 00:12:26.070 "digest": "sha256", 00:12:26.070 "state": "completed" 00:12:26.070 }, 00:12:26.070 "cntlid": 11, 00:12:26.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:26.070 "listen_address": { 00:12:26.070 "adrfam": "IPv4", 00:12:26.070 "traddr": "10.0.0.3", 00:12:26.070 "trsvcid": "4420", 00:12:26.070 "trtype": "TCP" 00:12:26.070 }, 00:12:26.070 "peer_address": { 00:12:26.070 "adrfam": "IPv4", 00:12:26.070 "traddr": "10.0.0.1", 00:12:26.070 "trsvcid": "48398", 00:12:26.070 "trtype": "TCP" 00:12:26.070 }, 00:12:26.070 "qid": 0, 00:12:26.070 "state": "enabled", 00:12:26.070 "thread": "nvmf_tgt_poll_group_000" 00:12:26.070 } 00:12:26.070 ]' 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.070 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.070 
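[Editor's note] Every hostrpc line at target/auth.sh@31 expands into an rpc.py call against /var/tmp/host.sock: the test drives two SPDK applications, the NVMe-oF target on its default RPC socket (reached through rpc_cmd) and a host-side bdev application on its own socket. A minimal stand-in for the wrapper, assuming it does nothing beyond forwarding its arguments (the xtrace is consistent with that, but the function body is not shown in the log):

# Minimal equivalent of the hostrpc helper seen at target/auth.sh@31:
# forward an RPC method and its arguments to the host-side SPDK socket.
hostrpc() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

# Used exactly as in the log, e.g. tearing down the host-side controller:
hostrpc bdev_nvme_detach_controller nvme0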
11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.329 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:26.329 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:27.265 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.524 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.107 00:12:28.107 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.107 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.107 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.378 { 00:12:28.378 "auth": { 00:12:28.378 "dhgroup": "ffdhe2048", 00:12:28.378 "digest": "sha256", 00:12:28.378 "state": "completed" 00:12:28.378 }, 00:12:28.378 "cntlid": 13, 00:12:28.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:28.378 "listen_address": { 00:12:28.378 "adrfam": "IPv4", 00:12:28.378 "traddr": "10.0.0.3", 00:12:28.378 "trsvcid": "4420", 00:12:28.378 "trtype": "TCP" 00:12:28.378 }, 00:12:28.378 "peer_address": { 00:12:28.378 "adrfam": "IPv4", 00:12:28.378 "traddr": "10.0.0.1", 00:12:28.378 "trsvcid": "48422", 00:12:28.378 "trtype": "TCP" 00:12:28.378 }, 00:12:28.378 "qid": 0, 00:12:28.378 "state": "enabled", 00:12:28.378 "thread": "nvmf_tgt_poll_group_000" 00:12:28.378 } 00:12:28.378 ]' 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.378 11:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.378 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.636 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:28.637 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:29.571 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
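[Editor's note] This key3 pass differs from the earlier slots: nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 only, with no --dhchap-ctrlr-key. That is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68 at work: when the controller-key array has no entry for the slot, the option pair disappears entirely and authentication for that slot is unidirectional. A self-contained illustration of the idiom (array contents assumed for illustration; the real script builds them from generated DHHC-1 secrets):

# ${var:+word} expands to word only if var is set and non-empty, so an
# empty slot erases the whole option pair instead of passing a blank.
ckeys=(ckey0 ckey1 ckey2)        # slot 3 deliberately has no ctrlr key

for slot in 0 1 2 3; do
  ckey=(${ckeys[$slot]:+--dhchap-ctrlr-key "ckey$slot"})
  echo "slot $slot: --dhchap-key key$slot ${ckey[*]}"
done
# slot 3 prints no --dhchap-ctrlr-key, matching the add_host and attach
# calls in the log above.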
00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.572 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.138 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.138 { 00:12:30.138 "auth": { 00:12:30.138 "dhgroup": "ffdhe2048", 00:12:30.138 "digest": "sha256", 00:12:30.138 "state": "completed" 00:12:30.138 }, 00:12:30.138 "cntlid": 15, 00:12:30.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:30.138 "listen_address": { 00:12:30.138 "adrfam": "IPv4", 00:12:30.138 "traddr": "10.0.0.3", 00:12:30.138 "trsvcid": "4420", 00:12:30.138 "trtype": "TCP" 00:12:30.138 }, 00:12:30.138 "peer_address": { 00:12:30.138 "adrfam": "IPv4", 00:12:30.138 "traddr": "10.0.0.1", 00:12:30.138 "trsvcid": "48450", 00:12:30.138 "trtype": "TCP" 00:12:30.138 }, 00:12:30.138 "qid": 0, 00:12:30.138 "state": "enabled", 00:12:30.138 "thread": "nvmf_tgt_poll_group_000" 00:12:30.138 } 00:12:30.138 ]' 00:12:30.138 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.397 
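[Editor's note] The three jq probes that follow each attach are the actual acceptance criteria: qpair 0 must report the digest and DH group just configured on both sides, and an auth state of "completed", meaning the DH-HMAC-CHAP exchange finished rather than merely being in progress. Paraphrased for the iteration that just ran (rpc_cmd is the autotest helper that drives the target's RPC socket):

# Assertion triple applied to the first qpair after every attach; the
# expected values track the bdev_nvme_set_options parameters in force.
qpairs="$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)"

[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]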
11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.397 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.656 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:30.656 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:31.591 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.851 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.109 00:12:32.109 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.109 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.109 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.368 { 00:12:32.368 "auth": { 00:12:32.368 "dhgroup": "ffdhe3072", 00:12:32.368 "digest": "sha256", 00:12:32.368 "state": "completed" 00:12:32.368 }, 00:12:32.368 "cntlid": 17, 00:12:32.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:32.368 "listen_address": { 00:12:32.368 "adrfam": "IPv4", 00:12:32.368 "traddr": "10.0.0.3", 00:12:32.368 "trsvcid": "4420", 00:12:32.368 "trtype": "TCP" 00:12:32.368 }, 00:12:32.368 "peer_address": { 00:12:32.368 "adrfam": "IPv4", 00:12:32.368 "traddr": "10.0.0.1", 00:12:32.368 "trsvcid": "48484", 00:12:32.368 "trtype": "TCP" 00:12:32.368 }, 00:12:32.368 "qid": 0, 00:12:32.368 "state": "enabled", 00:12:32.368 "thread": "nvmf_tgt_poll_group_000" 00:12:32.368 } 00:12:32.368 ]' 00:12:32.368 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.626 11:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.626 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.934 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:32.934 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:33.500 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.759 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.324 00:12:34.324 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.324 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.324 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.582 { 00:12:34.582 "auth": { 00:12:34.582 "dhgroup": "ffdhe3072", 00:12:34.582 "digest": "sha256", 00:12:34.582 "state": "completed" 00:12:34.582 }, 00:12:34.582 "cntlid": 19, 00:12:34.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:34.582 "listen_address": { 00:12:34.582 "adrfam": "IPv4", 00:12:34.582 "traddr": "10.0.0.3", 00:12:34.582 "trsvcid": "4420", 00:12:34.582 "trtype": "TCP" 00:12:34.582 }, 00:12:34.582 "peer_address": { 00:12:34.582 "adrfam": "IPv4", 00:12:34.582 "traddr": "10.0.0.1", 00:12:34.582 "trsvcid": "55818", 00:12:34.582 "trtype": "TCP" 00:12:34.582 }, 00:12:34.582 "qid": 0, 00:12:34.582 "state": "enabled", 00:12:34.582 "thread": "nvmf_tgt_poll_group_000" 00:12:34.582 } 00:12:34.582 ]' 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.582 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.841 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:34.841 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:35.776 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.035 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.293 00:12:36.294 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.294 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.294 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.552 { 00:12:36.552 "auth": { 00:12:36.552 "dhgroup": "ffdhe3072", 00:12:36.552 "digest": "sha256", 00:12:36.552 "state": "completed" 00:12:36.552 }, 00:12:36.552 "cntlid": 21, 00:12:36.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:36.552 "listen_address": { 00:12:36.552 "adrfam": "IPv4", 00:12:36.552 "traddr": "10.0.0.3", 00:12:36.552 "trsvcid": "4420", 00:12:36.552 "trtype": "TCP" 00:12:36.552 }, 00:12:36.552 "peer_address": { 00:12:36.552 "adrfam": "IPv4", 00:12:36.552 "traddr": "10.0.0.1", 00:12:36.552 "trsvcid": "55864", 00:12:36.552 "trtype": "TCP" 00:12:36.552 }, 00:12:36.552 "qid": 0, 00:12:36.552 "state": "enabled", 00:12:36.552 "thread": "nvmf_tgt_poll_group_000" 00:12:36.552 } 00:12:36.552 ]' 00:12:36.552 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.811 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.811 11:32:41 
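[Editor's note] Each bdev-level pass is mirrored by a kernel-initiator pass: nvme-cli connects with the same secrets in their DHHC-1 wire format and is then disconnected by subsystem NQN. A sketch of that leg with the secret blobs elided for readability (the full values appear verbatim in the log; per the NVMe DH-HMAC-CHAP secret representation, the 00/01/02/03 field after DHHC-1 is understood to mean no transform, SHA-256, SHA-384, or SHA-512 respectively, a reading this log does not itself confirm):

# Kernel-initiator counterpart of the RPC-level checks above.
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=503bc55d-5584-47ba-b171-ff181f376052

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
  --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

nvme disconnect -n "$subnqn"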
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.811 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:36.811 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.811 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.811 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.811 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.069 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:37.069 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.004 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.262 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.262 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.262 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.262 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.521 00:12:38.521 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.521 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.521 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.780 { 00:12:38.780 "auth": { 00:12:38.780 "dhgroup": "ffdhe3072", 00:12:38.780 "digest": "sha256", 00:12:38.780 "state": "completed" 00:12:38.780 }, 00:12:38.780 "cntlid": 23, 00:12:38.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:38.780 "listen_address": { 00:12:38.780 "adrfam": "IPv4", 00:12:38.780 "traddr": "10.0.0.3", 00:12:38.780 "trsvcid": "4420", 00:12:38.780 "trtype": "TCP" 00:12:38.780 }, 00:12:38.780 "peer_address": { 00:12:38.780 "adrfam": "IPv4", 00:12:38.780 "traddr": "10.0.0.1", 00:12:38.780 "trsvcid": "55884", 00:12:38.780 "trtype": "TCP" 00:12:38.780 }, 00:12:38.780 "qid": 0, 00:12:38.780 "state": "enabled", 00:12:38.780 "thread": "nvmf_tgt_poll_group_000" 00:12:38.780 } 00:12:38.780 ]' 00:12:38.780 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.038 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.297 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:39.297 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.231 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.798 00:12:40.798 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.798 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.798 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.056 { 00:12:41.056 "auth": { 00:12:41.056 "dhgroup": "ffdhe4096", 00:12:41.056 "digest": "sha256", 00:12:41.056 "state": "completed" 00:12:41.056 }, 00:12:41.056 "cntlid": 25, 00:12:41.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:41.056 "listen_address": { 00:12:41.056 "adrfam": "IPv4", 00:12:41.056 "traddr": "10.0.0.3", 00:12:41.056 "trsvcid": "4420", 00:12:41.056 "trtype": "TCP" 00:12:41.056 }, 00:12:41.056 "peer_address": { 00:12:41.056 "adrfam": "IPv4", 00:12:41.056 "traddr": "10.0.0.1", 00:12:41.056 "trsvcid": "55918", 00:12:41.056 "trtype": "TCP" 00:12:41.056 }, 00:12:41.056 "qid": 0, 00:12:41.056 "state": "enabled", 00:12:41.056 "thread": "nvmf_tgt_poll_group_000" 00:12:41.056 } 00:12:41.056 ]' 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.056 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.314 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:41.314 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.314 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.314 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.314 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.572 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:41.572 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.508 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.074 00:12:43.074 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.074 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.074 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.332 { 00:12:43.332 "auth": { 00:12:43.332 "dhgroup": "ffdhe4096", 00:12:43.332 "digest": "sha256", 00:12:43.332 "state": "completed" 00:12:43.332 }, 00:12:43.332 "cntlid": 27, 00:12:43.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:43.332 "listen_address": { 00:12:43.332 "adrfam": "IPv4", 00:12:43.332 "traddr": "10.0.0.3", 00:12:43.332 "trsvcid": "4420", 00:12:43.332 "trtype": "TCP" 00:12:43.332 }, 00:12:43.332 "peer_address": { 00:12:43.332 "adrfam": "IPv4", 00:12:43.332 "traddr": "10.0.0.1", 00:12:43.332 "trsvcid": "58962", 00:12:43.332 "trtype": "TCP" 00:12:43.332 }, 00:12:43.332 "qid": 0, 
00:12:43.332 "state": "enabled", 00:12:43.332 "thread": "nvmf_tgt_poll_group_000" 00:12:43.332 } 00:12:43.332 ]' 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.332 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.591 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:43.591 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.591 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.591 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.591 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.849 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:43.849 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:44.415 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.674 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.241 00:12:45.241 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.241 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.241 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.500 { 00:12:45.500 "auth": { 00:12:45.500 "dhgroup": "ffdhe4096", 00:12:45.500 "digest": "sha256", 00:12:45.500 "state": "completed" 00:12:45.500 }, 00:12:45.500 "cntlid": 29, 00:12:45.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:45.500 "listen_address": { 00:12:45.500 "adrfam": "IPv4", 00:12:45.500 "traddr": "10.0.0.3", 00:12:45.500 "trsvcid": "4420", 00:12:45.500 "trtype": "TCP" 00:12:45.500 }, 00:12:45.500 "peer_address": { 00:12:45.500 "adrfam": "IPv4", 00:12:45.500 "traddr": "10.0.0.1", 
00:12:45.500 "trsvcid": "58992", 00:12:45.500 "trtype": "TCP" 00:12:45.500 }, 00:12:45.500 "qid": 0, 00:12:45.500 "state": "enabled", 00:12:45.500 "thread": "nvmf_tgt_poll_group_000" 00:12:45.500 } 00:12:45.500 ]' 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:45.500 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.759 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.759 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.759 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.018 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:46.018 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:46.585 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.844 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.411 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.411 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.411 { 00:12:47.411 "auth": { 00:12:47.411 "dhgroup": "ffdhe4096", 00:12:47.411 "digest": "sha256", 00:12:47.411 "state": "completed" 00:12:47.411 }, 00:12:47.411 "cntlid": 31, 00:12:47.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:47.411 "listen_address": { 00:12:47.411 "adrfam": "IPv4", 00:12:47.411 "traddr": "10.0.0.3", 00:12:47.411 "trsvcid": "4420", 00:12:47.411 "trtype": "TCP" 00:12:47.411 }, 00:12:47.411 "peer_address": { 00:12:47.411 "adrfam": "IPv4", 00:12:47.411 "traddr": 
"10.0.0.1", 00:12:47.411 "trsvcid": "59012", 00:12:47.411 "trtype": "TCP" 00:12:47.411 }, 00:12:47.411 "qid": 0, 00:12:47.411 "state": "enabled", 00:12:47.411 "thread": "nvmf_tgt_poll_group_000" 00:12:47.411 } 00:12:47.411 ]' 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.670 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.929 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:47.929 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:48.917 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.917 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.484 00:12:49.484 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.484 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.484 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.743 { 00:12:49.743 "auth": { 00:12:49.743 "dhgroup": "ffdhe6144", 00:12:49.743 "digest": "sha256", 00:12:49.743 "state": "completed" 00:12:49.743 }, 00:12:49.743 "cntlid": 33, 00:12:49.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:49.743 "listen_address": { 00:12:49.743 "adrfam": "IPv4", 00:12:49.743 "traddr": "10.0.0.3", 00:12:49.743 "trsvcid": "4420", 00:12:49.743 
"trtype": "TCP" 00:12:49.743 }, 00:12:49.743 "peer_address": { 00:12:49.743 "adrfam": "IPv4", 00:12:49.743 "traddr": "10.0.0.1", 00:12:49.743 "trsvcid": "59038", 00:12:49.743 "trtype": "TCP" 00:12:49.743 }, 00:12:49.743 "qid": 0, 00:12:49.743 "state": "enabled", 00:12:49.743 "thread": "nvmf_tgt_poll_group_000" 00:12:49.743 } 00:12:49.743 ]' 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.743 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.308 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:50.308 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:50.874 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.132 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.390 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.390 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.390 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.648 00:12:51.906 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.906 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.906 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.164 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.164 { 00:12:52.164 "auth": { 00:12:52.164 "dhgroup": "ffdhe6144", 00:12:52.164 "digest": "sha256", 00:12:52.164 "state": "completed" 00:12:52.164 }, 00:12:52.164 "cntlid": 35, 00:12:52.164 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:52.164 "listen_address": { 00:12:52.164 "adrfam": "IPv4", 00:12:52.164 "traddr": "10.0.0.3", 00:12:52.164 "trsvcid": "4420", 00:12:52.164 "trtype": "TCP" 00:12:52.164 }, 00:12:52.164 "peer_address": { 00:12:52.164 "adrfam": "IPv4", 00:12:52.164 "traddr": "10.0.0.1", 00:12:52.164 "trsvcid": "59068", 00:12:52.164 "trtype": "TCP" 00:12:52.164 }, 00:12:52.164 "qid": 0, 00:12:52.164 "state": "enabled", 00:12:52.164 "thread": "nvmf_tgt_poll_group_000" 00:12:52.164 } 00:12:52.164 ]' 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.165 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.730 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:52.730 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:53.295 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.553 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.117 00:12:54.117 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.117 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.117 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.376 { 00:12:54.376 "auth": { 00:12:54.376 "dhgroup": "ffdhe6144", 
00:12:54.376 "digest": "sha256", 00:12:54.376 "state": "completed" 00:12:54.376 }, 00:12:54.376 "cntlid": 37, 00:12:54.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:54.376 "listen_address": { 00:12:54.376 "adrfam": "IPv4", 00:12:54.376 "traddr": "10.0.0.3", 00:12:54.376 "trsvcid": "4420", 00:12:54.376 "trtype": "TCP" 00:12:54.376 }, 00:12:54.376 "peer_address": { 00:12:54.376 "adrfam": "IPv4", 00:12:54.376 "traddr": "10.0.0.1", 00:12:54.376 "trsvcid": "40262", 00:12:54.376 "trtype": "TCP" 00:12:54.376 }, 00:12:54.376 "qid": 0, 00:12:54.376 "state": "enabled", 00:12:54.376 "thread": "nvmf_tgt_poll_group_000" 00:12:54.376 } 00:12:54.376 ]' 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.376 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.634 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:54.634 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:12:55.567 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:55.824 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:55.824 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.824 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.824 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:55.824 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.825 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.391 00:12:56.391 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.391 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.391 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.650 { 00:12:56.650 "auth": { 00:12:56.650 "dhgroup": 
"ffdhe6144", 00:12:56.650 "digest": "sha256", 00:12:56.650 "state": "completed" 00:12:56.650 }, 00:12:56.650 "cntlid": 39, 00:12:56.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:56.650 "listen_address": { 00:12:56.650 "adrfam": "IPv4", 00:12:56.650 "traddr": "10.0.0.3", 00:12:56.650 "trsvcid": "4420", 00:12:56.650 "trtype": "TCP" 00:12:56.650 }, 00:12:56.650 "peer_address": { 00:12:56.650 "adrfam": "IPv4", 00:12:56.650 "traddr": "10.0.0.1", 00:12:56.650 "trsvcid": "40280", 00:12:56.650 "trtype": "TCP" 00:12:56.650 }, 00:12:56.650 "qid": 0, 00:12:56.650 "state": "enabled", 00:12:56.650 "thread": "nvmf_tgt_poll_group_000" 00:12:56.650 } 00:12:56.650 ]' 00:12:56.650 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.650 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.650 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.909 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:56.909 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.909 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.909 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.909 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.168 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:57.168 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:12:57.734 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.734 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:12:57.734 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.734 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.993 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.993 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:57.993 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.993 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:57.993 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:58.250 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:58.250 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.250 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.250 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:58.250 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.816 00:12:59.074 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.074 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.074 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.332 11:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.332 { 00:12:59.332 "auth": { 00:12:59.332 "dhgroup": "ffdhe8192", 00:12:59.332 "digest": "sha256", 00:12:59.332 "state": "completed" 00:12:59.332 }, 00:12:59.332 "cntlid": 41, 00:12:59.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:12:59.332 "listen_address": { 00:12:59.332 "adrfam": "IPv4", 00:12:59.332 "traddr": "10.0.0.3", 00:12:59.332 "trsvcid": "4420", 00:12:59.332 "trtype": "TCP" 00:12:59.332 }, 00:12:59.332 "peer_address": { 00:12:59.332 "adrfam": "IPv4", 00:12:59.332 "traddr": "10.0.0.1", 00:12:59.332 "trsvcid": "40304", 00:12:59.332 "trtype": "TCP" 00:12:59.332 }, 00:12:59.332 "qid": 0, 00:12:59.332 "state": "enabled", 00:12:59.332 "thread": "nvmf_tgt_poll_group_000" 00:12:59.332 } 00:12:59.332 ]' 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.332 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.898 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:12:59.898 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.465 11:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:00.465 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.030 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.596 00:13:01.596 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.596 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.596 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.872 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.872 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.872 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.872 11:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.872 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.872 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.872 { 00:13:01.872 "auth": { 00:13:01.872 "dhgroup": "ffdhe8192", 00:13:01.872 "digest": "sha256", 00:13:01.872 "state": "completed" 00:13:01.872 }, 00:13:01.872 "cntlid": 43, 00:13:01.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:01.872 "listen_address": { 00:13:01.872 "adrfam": "IPv4", 00:13:01.872 "traddr": "10.0.0.3", 00:13:01.872 "trsvcid": "4420", 00:13:01.872 "trtype": "TCP" 00:13:01.872 }, 00:13:01.872 "peer_address": { 00:13:01.872 "adrfam": "IPv4", 00:13:01.872 "traddr": "10.0.0.1", 00:13:01.872 "trsvcid": "40344", 00:13:01.872 "trtype": "TCP" 00:13:01.872 }, 00:13:01.872 "qid": 0, 00:13:01.872 "state": "enabled", 00:13:01.873 "thread": "nvmf_tgt_poll_group_000" 00:13:01.873 } 00:13:01.873 ]' 00:13:01.873 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.873 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.873 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.873 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.873 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.141 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.141 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.141 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.399 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:02.399 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
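Each attach is then verified from both ends before teardown: the host must list the new controller under its bdev name, and the target must report a qpair whose auth block carries the expected digest and DH group with state "completed". Continuing the sketch above, inside connect_authenticate (jq filters copied verbatim from the trace; $digest and $dhgroup are the iteration's parameters):

    # Host side: the attached controller shows up under the requested bdev name
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Target side: the qpair's auth block reflects the negotiated parameters
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Detach so the next combination starts from a clean host
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0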
00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:02.965 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.531 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.098 00:13:04.098 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.098 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.098 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.356 11:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.356 { 00:13:04.356 "auth": { 00:13:04.356 "dhgroup": "ffdhe8192", 00:13:04.356 "digest": "sha256", 00:13:04.356 "state": "completed" 00:13:04.356 }, 00:13:04.356 "cntlid": 45, 00:13:04.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:04.356 "listen_address": { 00:13:04.356 "adrfam": "IPv4", 00:13:04.356 "traddr": "10.0.0.3", 00:13:04.356 "trsvcid": "4420", 00:13:04.356 "trtype": "TCP" 00:13:04.356 }, 00:13:04.356 "peer_address": { 00:13:04.356 "adrfam": "IPv4", 00:13:04.356 "traddr": "10.0.0.1", 00:13:04.356 "trsvcid": "45898", 00:13:04.356 "trtype": "TCP" 00:13:04.356 }, 00:13:04.356 "qid": 0, 00:13:04.356 "state": "enabled", 00:13:04.356 "thread": "nvmf_tgt_poll_group_000" 00:13:04.356 } 00:13:04.356 ]' 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:04.356 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.614 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.614 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.614 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.872 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:04.872 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:05.437 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.437 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:05.437 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
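The kernel-initiator leg of each round (the nvme connect calls above) passes the DHHC-1 secrets directly on the command line; --dhchap-ctrl-secret is what makes the authentication bidirectional. A condensed sketch, with $hostnqn, $hostid, $key, and $ckey standing in for the literal values visible in the trace:

# $key is the per-host secret (DHHC-1:01:... here); $ckey is the
# controller secret (DHHC-1:02:...) the host uses to verify the target.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0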
00:13:05.438 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.438 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.438 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.438 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:05.438 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:05.695 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:05.695 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.695 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:05.695 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.696 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.630 00:13:06.630 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.630 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.630 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.630 
11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.630 { 00:13:06.630 "auth": { 00:13:06.630 "dhgroup": "ffdhe8192", 00:13:06.630 "digest": "sha256", 00:13:06.630 "state": "completed" 00:13:06.630 }, 00:13:06.630 "cntlid": 47, 00:13:06.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:06.630 "listen_address": { 00:13:06.630 "adrfam": "IPv4", 00:13:06.630 "traddr": "10.0.0.3", 00:13:06.630 "trsvcid": "4420", 00:13:06.630 "trtype": "TCP" 00:13:06.630 }, 00:13:06.630 "peer_address": { 00:13:06.630 "adrfam": "IPv4", 00:13:06.630 "traddr": "10.0.0.1", 00:13:06.630 "trsvcid": "45926", 00:13:06.630 "trtype": "TCP" 00:13:06.630 }, 00:13:06.630 "qid": 0, 00:13:06.630 "state": "enabled", 00:13:06.630 "thread": "nvmf_tgt_poll_group_000" 00:13:06.630 } 00:13:06.630 ]' 00:13:06.630 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.888 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.146 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:07.146 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
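Note the key3 round that just finished (cntlid 47): nvmf_subsystem_add_host was called without --dhchap-ctrlr-key, and nvme connect carried only a single DHHC-1:03 secret, i.e. unidirectional authentication. That falls out of the ckey expansion visible in the trace; a sketch of the mechanism:

# ${ckeys[$3]:+...} expands to the option words only when ckeys[$3] is
# set and non-empty, so ckey becomes an empty array for key indexes that
# have no controller secret, and "${ckey[@]}" then expands to nothing.
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$3" "${ckey[@]}"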
00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:08.080 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.339 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.597 00:13:08.597 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.597 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.597 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.855 { 00:13:08.855 "auth": { 00:13:08.855 "dhgroup": "null", 00:13:08.855 "digest": "sha384", 00:13:08.855 "state": "completed" 00:13:08.855 }, 00:13:08.855 "cntlid": 49, 00:13:08.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:08.855 "listen_address": { 00:13:08.855 "adrfam": "IPv4", 00:13:08.855 "traddr": "10.0.0.3", 00:13:08.855 "trsvcid": "4420", 00:13:08.855 "trtype": "TCP" 00:13:08.855 }, 00:13:08.855 "peer_address": { 00:13:08.855 "adrfam": "IPv4", 00:13:08.855 "traddr": "10.0.0.1", 00:13:08.855 "trsvcid": "45962", 00:13:08.855 "trtype": "TCP" 00:13:08.855 }, 00:13:08.855 "qid": 0, 00:13:08.855 "state": "enabled", 00:13:08.855 "thread": "nvmf_tgt_poll_group_000" 00:13:08.855 } 00:13:08.855 ]' 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.855 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.113 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:09.113 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.113 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.113 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.113 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.371 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:09.371 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.308 11:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.308 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.876 00:13:10.876 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.876 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
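By this point the sweep has moved from sha256/ffdhe8192 to sha384/null, following the three nested loops the auth.sh@118-@121 markers identify. Roughly, under the assumption that the arrays hold the usual digest and FFDHE group names (only sha256, sha384, null, ffdhe2048, and ffdhe8192 are confirmed by this excerpt):

for digest in "${digests[@]}"; do        # sha256 sha384 ...
    for dhgroup in "${dhgroups[@]}"; do  # null ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do   # 0..3 in this run
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done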
00:13:10.876 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.134 { 00:13:11.134 "auth": { 00:13:11.134 "dhgroup": "null", 00:13:11.134 "digest": "sha384", 00:13:11.134 "state": "completed" 00:13:11.134 }, 00:13:11.134 "cntlid": 51, 00:13:11.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:11.134 "listen_address": { 00:13:11.134 "adrfam": "IPv4", 00:13:11.134 "traddr": "10.0.0.3", 00:13:11.134 "trsvcid": "4420", 00:13:11.134 "trtype": "TCP" 00:13:11.134 }, 00:13:11.134 "peer_address": { 00:13:11.134 "adrfam": "IPv4", 00:13:11.134 "traddr": "10.0.0.1", 00:13:11.134 "trsvcid": "45978", 00:13:11.134 "trtype": "TCP" 00:13:11.134 }, 00:13:11.134 "qid": 0, 00:13:11.134 "state": "enabled", 00:13:11.134 "thread": "nvmf_tgt_poll_group_000" 00:13:11.134 } 00:13:11.134 ]' 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.134 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.393 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:11.393 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.006 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:12.006 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.573 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.832 00:13:12.832 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.832 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:13:12.832 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.090 { 00:13:13.090 "auth": { 00:13:13.090 "dhgroup": "null", 00:13:13.090 "digest": "sha384", 00:13:13.090 "state": "completed" 00:13:13.090 }, 00:13:13.090 "cntlid": 53, 00:13:13.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:13.090 "listen_address": { 00:13:13.090 "adrfam": "IPv4", 00:13:13.090 "traddr": "10.0.0.3", 00:13:13.090 "trsvcid": "4420", 00:13:13.090 "trtype": "TCP" 00:13:13.090 }, 00:13:13.090 "peer_address": { 00:13:13.090 "adrfam": "IPv4", 00:13:13.090 "traddr": "10.0.0.1", 00:13:13.090 "trsvcid": "39166", 00:13:13.090 "trtype": "TCP" 00:13:13.090 }, 00:13:13.090 "qid": 0, 00:13:13.090 "state": "enabled", 00:13:13.090 "thread": "nvmf_tgt_poll_group_000" 00:13:13.090 } 00:13:13.090 ]' 00:13:13.090 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.091 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.091 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.091 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:13.091 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.349 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.349 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.349 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.608 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:13.608 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.175 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.433 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.434 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.999 00:13:14.999 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.000 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.000 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.257 { 00:13:15.257 "auth": { 00:13:15.257 "dhgroup": "null", 00:13:15.257 "digest": "sha384", 00:13:15.257 "state": "completed" 00:13:15.257 }, 00:13:15.257 "cntlid": 55, 00:13:15.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:15.257 "listen_address": { 00:13:15.257 "adrfam": "IPv4", 00:13:15.257 "traddr": "10.0.0.3", 00:13:15.257 "trsvcid": "4420", 00:13:15.257 "trtype": "TCP" 00:13:15.257 }, 00:13:15.257 "peer_address": { 00:13:15.257 "adrfam": "IPv4", 00:13:15.257 "traddr": "10.0.0.1", 00:13:15.257 "trsvcid": "39184", 00:13:15.257 "trtype": "TCP" 00:13:15.257 }, 00:13:15.257 "qid": 0, 00:13:15.257 "state": "enabled", 00:13:15.257 "thread": "nvmf_tgt_poll_group_000" 00:13:15.257 } 00:13:15.257 ]' 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:15.257 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.515 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.515 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.515 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.774 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:15.774 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:16.340 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.598 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.163 00:13:17.163 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
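Every attach is followed by the same assertion block (auth.sh@73-@77): confirm the controller came up, fetch the subsystem's qpairs from the target, and check the negotiated digest, dhgroup, and authentication state with jq. Condensed, with $digest/$dhgroup standing for the current round's values:

[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]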
00:13:17.163 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.163 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.421 { 00:13:17.421 "auth": { 00:13:17.421 "dhgroup": "ffdhe2048", 00:13:17.421 "digest": "sha384", 00:13:17.421 "state": "completed" 00:13:17.421 }, 00:13:17.421 "cntlid": 57, 00:13:17.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:17.421 "listen_address": { 00:13:17.421 "adrfam": "IPv4", 00:13:17.421 "traddr": "10.0.0.3", 00:13:17.421 "trsvcid": "4420", 00:13:17.421 "trtype": "TCP" 00:13:17.421 }, 00:13:17.421 "peer_address": { 00:13:17.421 "adrfam": "IPv4", 00:13:17.421 "traddr": "10.0.0.1", 00:13:17.421 "trsvcid": "39216", 00:13:17.421 "trtype": "TCP" 00:13:17.421 }, 00:13:17.421 "qid": 0, 00:13:17.421 "state": "enabled", 00:13:17.421 "thread": "nvmf_tgt_poll_group_000" 00:13:17.421 } 00:13:17.421 ]' 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.421 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.679 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:17.679 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: 
--dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:18.614 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.873 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.131 00:13:19.131 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.131 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.131 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.389 { 00:13:19.389 "auth": { 00:13:19.389 "dhgroup": "ffdhe2048", 00:13:19.389 "digest": "sha384", 00:13:19.389 "state": "completed" 00:13:19.389 }, 00:13:19.389 "cntlid": 59, 00:13:19.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:19.389 "listen_address": { 00:13:19.389 "adrfam": "IPv4", 00:13:19.389 "traddr": "10.0.0.3", 00:13:19.389 "trsvcid": "4420", 00:13:19.389 "trtype": "TCP" 00:13:19.389 }, 00:13:19.389 "peer_address": { 00:13:19.389 "adrfam": "IPv4", 00:13:19.389 "traddr": "10.0.0.1", 00:13:19.389 "trsvcid": "39256", 00:13:19.389 "trtype": "TCP" 00:13:19.389 }, 00:13:19.389 "qid": 0, 00:13:19.389 "state": "enabled", 00:13:19.389 "thread": "nvmf_tgt_poll_group_000" 00:13:19.389 } 00:13:19.389 ]' 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.389 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.647 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:19.647 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.647 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.647 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.648 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.951 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:19.951 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:20.516 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.516 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:20.516 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.516 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.775 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.775 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.775 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:20.775 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.033 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.292 00:13:21.292 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.292 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.292 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.858 { 00:13:21.858 "auth": { 00:13:21.858 "dhgroup": "ffdhe2048", 00:13:21.858 "digest": "sha384", 00:13:21.858 "state": "completed" 00:13:21.858 }, 00:13:21.858 "cntlid": 61, 00:13:21.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:21.858 "listen_address": { 00:13:21.858 "adrfam": "IPv4", 00:13:21.858 "traddr": "10.0.0.3", 00:13:21.858 "trsvcid": "4420", 00:13:21.858 "trtype": "TCP" 00:13:21.858 }, 00:13:21.858 "peer_address": { 00:13:21.858 "adrfam": "IPv4", 00:13:21.858 "traddr": "10.0.0.1", 00:13:21.858 "trsvcid": "39292", 00:13:21.858 "trtype": "TCP" 00:13:21.858 }, 00:13:21.858 "qid": 0, 00:13:21.858 "state": "enabled", 00:13:21.858 "thread": "nvmf_tgt_poll_group_000" 00:13:21.858 } 00:13:21.858 ]' 00:13:21.858 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.858 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.116 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:22.116 11:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:23.050 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.309 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.567 00:13:23.567 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.567 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.567 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.825 { 00:13:23.825 "auth": { 00:13:23.825 "dhgroup": "ffdhe2048", 00:13:23.825 "digest": "sha384", 00:13:23.825 "state": "completed" 00:13:23.825 }, 00:13:23.825 "cntlid": 63, 00:13:23.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:23.825 "listen_address": { 00:13:23.825 "adrfam": "IPv4", 00:13:23.825 "traddr": "10.0.0.3", 00:13:23.825 "trsvcid": "4420", 00:13:23.825 "trtype": "TCP" 00:13:23.825 }, 00:13:23.825 "peer_address": { 00:13:23.825 "adrfam": "IPv4", 00:13:23.825 "traddr": "10.0.0.1", 00:13:23.825 "trsvcid": "39970", 00:13:23.825 "trtype": "TCP" 00:13:23.825 }, 00:13:23.825 "qid": 0, 00:13:23.825 "state": "enabled", 00:13:23.825 "thread": "nvmf_tgt_poll_group_000" 00:13:23.825 } 00:13:23.825 ]' 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.825 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.082 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:24.082 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.082 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.082 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.082 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.343 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:24.343 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:24.914 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:25.172 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.737 00:13:25.737 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.737 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.737 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.995 { 00:13:25.995 "auth": { 00:13:25.995 "dhgroup": "ffdhe3072", 00:13:25.995 "digest": "sha384", 00:13:25.995 "state": "completed" 00:13:25.995 }, 00:13:25.995 "cntlid": 65, 00:13:25.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:25.995 "listen_address": { 00:13:25.995 "adrfam": "IPv4", 00:13:25.995 "traddr": "10.0.0.3", 00:13:25.995 "trsvcid": "4420", 00:13:25.995 "trtype": "TCP" 00:13:25.995 }, 00:13:25.995 "peer_address": { 00:13:25.995 "adrfam": "IPv4", 00:13:25.995 "traddr": "10.0.0.1", 00:13:25.995 "trsvcid": "39992", 00:13:25.995 "trtype": "TCP" 00:13:25.995 }, 00:13:25.995 "qid": 0, 00:13:25.995 "state": "enabled", 00:13:25.995 "thread": "nvmf_tgt_poll_group_000" 00:13:25.995 } 00:13:25.995 ]' 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.995 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.253 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:26.253 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:27.186 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.444 11:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.444 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.702 00:13:27.702 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.702 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.702 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.960 { 00:13:27.960 "auth": { 00:13:27.960 "dhgroup": "ffdhe3072", 00:13:27.960 "digest": "sha384", 00:13:27.960 "state": "completed" 00:13:27.960 }, 00:13:27.960 "cntlid": 67, 00:13:27.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:27.960 "listen_address": { 00:13:27.960 "adrfam": "IPv4", 00:13:27.960 "traddr": "10.0.0.3", 00:13:27.960 "trsvcid": "4420", 00:13:27.960 "trtype": "TCP" 00:13:27.960 }, 00:13:27.960 "peer_address": { 00:13:27.960 "adrfam": "IPv4", 00:13:27.960 "traddr": "10.0.0.1", 00:13:27.960 "trsvcid": "40014", 00:13:27.960 "trtype": "TCP" 00:13:27.960 }, 00:13:27.960 "qid": 0, 00:13:27.960 "state": "enabled", 00:13:27.960 "thread": "nvmf_tgt_poll_group_000" 00:13:27.960 } 00:13:27.960 ]' 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.960 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.217 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:28.217 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.217 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.217 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.217 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.475 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:28.475 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.075 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.333 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.897 00:13:29.897 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.897 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.897 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.154 { 00:13:30.154 "auth": { 00:13:30.154 "dhgroup": "ffdhe3072", 00:13:30.154 "digest": "sha384", 00:13:30.154 "state": "completed" 00:13:30.154 }, 00:13:30.154 "cntlid": 69, 00:13:30.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:30.154 "listen_address": { 00:13:30.154 "adrfam": "IPv4", 00:13:30.154 "traddr": "10.0.0.3", 00:13:30.154 "trsvcid": "4420", 00:13:30.154 "trtype": "TCP" 00:13:30.154 }, 00:13:30.154 "peer_address": { 00:13:30.154 "adrfam": "IPv4", 00:13:30.154 "traddr": "10.0.0.1", 00:13:30.154 "trsvcid": "40050", 00:13:30.154 "trtype": "TCP" 00:13:30.154 }, 00:13:30.154 "qid": 0, 00:13:30.154 "state": "enabled", 00:13:30.154 "thread": "nvmf_tgt_poll_group_000" 00:13:30.154 } 00:13:30.154 ]' 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:30.154 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.412 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:30.412 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.345 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.910 00:13:31.910 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.910 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.910 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.167 { 00:13:32.167 "auth": { 00:13:32.167 "dhgroup": "ffdhe3072", 00:13:32.167 "digest": "sha384", 00:13:32.167 "state": "completed" 00:13:32.167 }, 00:13:32.167 "cntlid": 71, 00:13:32.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:32.167 "listen_address": { 00:13:32.167 "adrfam": "IPv4", 00:13:32.167 "traddr": "10.0.0.3", 00:13:32.167 "trsvcid": "4420", 00:13:32.167 "trtype": "TCP" 00:13:32.167 }, 00:13:32.167 "peer_address": { 00:13:32.167 "adrfam": "IPv4", 00:13:32.167 "traddr": "10.0.0.1", 00:13:32.167 "trsvcid": "40086", 00:13:32.167 "trtype": "TCP" 00:13:32.167 }, 00:13:32.167 "qid": 0, 00:13:32.167 "state": "enabled", 00:13:32.167 "thread": "nvmf_tgt_poll_group_000" 00:13:32.167 } 00:13:32.167 ]' 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.167 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.425 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:32.425 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.358 11:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.358 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.929 00:13:33.929 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.929 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.929 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.204 { 00:13:34.204 "auth": { 00:13:34.204 "dhgroup": "ffdhe4096", 00:13:34.204 "digest": "sha384", 00:13:34.204 "state": "completed" 00:13:34.204 }, 00:13:34.204 "cntlid": 73, 00:13:34.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:34.204 "listen_address": { 00:13:34.204 "adrfam": "IPv4", 00:13:34.204 "traddr": "10.0.0.3", 00:13:34.204 "trsvcid": "4420", 00:13:34.204 "trtype": "TCP" 00:13:34.204 }, 00:13:34.204 "peer_address": { 00:13:34.204 "adrfam": "IPv4", 00:13:34.204 "traddr": "10.0.0.1", 00:13:34.204 "trsvcid": "43474", 00:13:34.204 "trtype": "TCP" 00:13:34.204 }, 00:13:34.204 "qid": 0, 00:13:34.204 "state": "enabled", 00:13:34.204 "thread": "nvmf_tgt_poll_group_000" 00:13:34.204 } 00:13:34.204 ]' 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.204 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.462 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.462 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.462 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.719 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:34.719 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:35.284 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.542 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.542 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.108 00:13:36.108 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.108 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.108 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.367 { 00:13:36.367 "auth": { 00:13:36.367 "dhgroup": "ffdhe4096", 00:13:36.367 "digest": "sha384", 00:13:36.367 "state": "completed" 00:13:36.367 }, 00:13:36.367 "cntlid": 75, 00:13:36.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:36.367 "listen_address": { 00:13:36.367 "adrfam": "IPv4", 00:13:36.367 "traddr": "10.0.0.3", 00:13:36.367 "trsvcid": "4420", 00:13:36.367 "trtype": "TCP" 00:13:36.367 }, 00:13:36.367 "peer_address": { 00:13:36.367 "adrfam": "IPv4", 00:13:36.367 "traddr": "10.0.0.1", 00:13:36.367 "trsvcid": "43498", 00:13:36.367 "trtype": "TCP" 00:13:36.367 }, 00:13:36.367 "qid": 0, 00:13:36.367 "state": "enabled", 00:13:36.367 "thread": "nvmf_tgt_poll_group_000" 00:13:36.367 } 00:13:36.367 ]' 00:13:36.367 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.626 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.884 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:36.884 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:37.450 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:37.727 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:38.009 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.009 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:38.009 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:38.009 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:38.268
00:13:38.268 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:38.268 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:38.268 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:38.527 {
00:13:38.527 "auth": {
00:13:38.527 "dhgroup": "ffdhe4096",
00:13:38.527 "digest": "sha384",
00:13:38.527 "state": "completed"
00:13:38.527 },
00:13:38.527 "cntlid": 77,
00:13:38.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:38.527 "listen_address": {
00:13:38.527 "adrfam": "IPv4",
00:13:38.527 "traddr": "10.0.0.3",
00:13:38.527 "trsvcid": "4420",
00:13:38.527 "trtype": "TCP"
00:13:38.527 },
00:13:38.527 "peer_address": {
00:13:38.527 "adrfam": "IPv4",
00:13:38.527 "traddr": "10.0.0.1",
00:13:38.527 "trsvcid": "43516",
00:13:38.527 "trtype": "TCP"
00:13:38.527 },
00:13:38.527 "qid": 0,
00:13:38.527 "state": "enabled",
00:13:38.527 "thread": "nvmf_tgt_poll_group_000"
00:13:38.527 }
00:13:38.527 ]'
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:38.527 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:38.785 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:38.785 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:38.785 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:39.044 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:13:39.044 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:39.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
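
The block above is one complete verification pass: the SPDK host attaches nvme0 with key2/ckey2, and the script then asserts that DH-HMAC-CHAP actually completed with the negotiated parameters before tearing the controller down. Note the two RPC sockets in play: rpc_cmd goes to the target's default socket, while hostrpc goes through -s /var/tmp/host.sock. Condensed to its essentials, the check is a sketch like the following (reusing the exact rpc.py and jq calls from the trace, run from the spdk repo root):

controllers=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers)
[[ $(jq -r '.[].name' <<< "$controllers") == nvme0 ]]            # controller came up
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]        # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]    # negotiated DH group
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]      # authentication finished
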
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:39.611 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:39.869 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:40.434
00:13:40.434 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:40.434 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:40.434 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:40.693 {
00:13:40.693 "auth": {
00:13:40.693 "dhgroup": "ffdhe4096",
00:13:40.693 "digest": "sha384",
00:13:40.693 "state": "completed"
00:13:40.693 },
00:13:40.693 "cntlid": 79,
00:13:40.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:40.693 "listen_address": {
00:13:40.693 "adrfam": "IPv4",
00:13:40.693 "traddr": "10.0.0.3",
00:13:40.693 "trsvcid": "4420",
00:13:40.693 "trtype": "TCP"
00:13:40.693 },
00:13:40.693 "peer_address": {
00:13:40.693 "adrfam": "IPv4",
00:13:40.693 "traddr": "10.0.0.1",
00:13:40.693 "trsvcid": "43548",
00:13:40.693 "trtype": "TCP"
00:13:40.693 },
00:13:40.693 "qid": 0,
00:13:40.693 "state": "enabled",
00:13:40.693 "thread": "nvmf_tgt_poll_group_000"
00:13:40.693 }
00:13:40.693 ]'
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:40.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:40.693 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:40.693 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:40.693 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:40.693 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:40.693 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:41.260 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:13:41.260 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:41.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:41.826 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:41.827 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:41.827 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:42.085 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:42.652
00:13:42.652 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:42.652 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:42.652 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:42.910 {
00:13:42.910 "auth": {
00:13:42.910 "dhgroup": "ffdhe6144",
00:13:42.910 "digest": "sha384",
00:13:42.910 "state": "completed"
00:13:42.910 },
00:13:42.910 "cntlid": 81,
00:13:42.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:42.910 "listen_address": {
00:13:42.910 "adrfam": "IPv4",
00:13:42.910 "traddr": "10.0.0.3",
00:13:42.910 "trsvcid": "4420",
00:13:42.910 "trtype": "TCP"
00:13:42.910 },
00:13:42.910 "peer_address": {
00:13:42.910 "adrfam": "IPv4",
00:13:42.910 "traddr": "10.0.0.1",
00:13:42.910 "trsvcid": "49022",
00:13:42.910 "trtype": "TCP"
00:13:42.910 },
00:13:42.910 "qid": 0,
00:13:42.910 "state": "enabled",
00:13:42.910 "thread": "nvmf_tgt_poll_group_000"
00:13:42.910 }
00:13:42.910 ]'
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:42.910 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:42.911 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:43.169 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:43.169 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:43.169 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:43.169 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:43.427 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:13:43.427 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:13:43.995 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:43.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:43.995 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:43.995 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.995 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:44.254 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.254 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:44.254 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:44.254 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:44.513 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:45.080
00:13:45.080 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:45.080 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:45.080 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:45.339 {
00:13:45.339 "auth": {
00:13:45.339 "dhgroup": "ffdhe6144",
00:13:45.339 "digest": "sha384",
00:13:45.339 "state": "completed"
00:13:45.339 },
00:13:45.339 "cntlid": 83,
00:13:45.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:45.339 "listen_address": {
00:13:45.339 "adrfam": "IPv4",
00:13:45.339 "traddr": "10.0.0.3",
00:13:45.339 "trsvcid": "4420",
00:13:45.339 "trtype": "TCP"
00:13:45.339 },
00:13:45.339 "peer_address": {
00:13:45.339 "adrfam": "IPv4",
00:13:45.339 "traddr": "10.0.0.1",
00:13:45.339 "trsvcid": "49064",
00:13:45.339 "trtype": "TCP"
00:13:45.339 },
00:13:45.339 "qid": 0,
00:13:45.339 "state": "enabled",
00:13:45.339 "thread": "nvmf_tgt_poll_group_000"
00:13:45.339 }
00:13:45.339 ]'
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:45.339 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:45.643 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:13:45.643 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:46.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:46.598 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.598 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.858 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.858 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:46.858 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:46.858 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:47.116
00:13:47.375 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:47.375 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:47.375 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:47.634 {
00:13:47.634 "auth": {
00:13:47.634 "dhgroup": "ffdhe6144",
00:13:47.634 "digest": "sha384",
00:13:47.634 "state": "completed"
00:13:47.634 },
00:13:47.634 "cntlid": 85,
00:13:47.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:47.634 "listen_address": {
00:13:47.634 "adrfam": "IPv4",
00:13:47.634 "traddr": "10.0.0.3",
00:13:47.634 "trsvcid": "4420",
00:13:47.634 "trtype": "TCP"
00:13:47.634 },
00:13:47.634 "peer_address": {
00:13:47.634 "adrfam": "IPv4",
00:13:47.634 "traddr": "10.0.0.1",
00:13:47.634 "trsvcid": "49082",
00:13:47.634 "trtype": "TCP"
00:13:47.634 },
00:13:47.634 "qid": 0,
00:13:47.634 "state": "enabled",
00:13:47.634 "thread": "nvmf_tgt_poll_group_000"
00:13:47.634 }
00:13:47.634 ]'
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:47.634 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:48.201 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:13:48.201 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv:
00:13:48.768 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:48.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:48.768 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:48.768 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:49.026 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:49.593
00:13:49.593 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:49.593 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:49.593 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:49.878 {
00:13:49.878 "auth": {
00:13:49.878 "dhgroup": "ffdhe6144",
00:13:49.878 "digest": "sha384",
00:13:49.878 "state": "completed"
00:13:49.878 },
00:13:49.878 "cntlid": 87,
00:13:49.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:49.878 "listen_address": {
00:13:49.878 "adrfam": "IPv4",
00:13:49.878 "traddr": "10.0.0.3",
00:13:49.878 "trsvcid": "4420",
00:13:49.878 "trtype": "TCP"
00:13:49.878 },
00:13:49.878 "peer_address": {
00:13:49.878 "adrfam": "IPv4",
00:13:49.878 "traddr": "10.0.0.1",
00:13:49.878 "trsvcid": "49116",
00:13:49.878 "trtype": "TCP"
00:13:49.878 },
00:13:49.878 "qid": 0,
00:13:49.878 "state": "enabled",
00:13:49.878 "thread": "nvmf_tgt_poll_group_000"
00:13:49.878 }
00:13:49.878 ]'
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:49.878 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:49.879 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:50.157 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:50.157 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:50.157 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:50.157 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:50.415 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:13:50.415 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=:
00:13:50.982 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:50.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:50.983 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:51.549 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:52.116
00:13:52.116 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:52.116 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:52.116 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:52.375 {
00:13:52.375 "auth": {
00:13:52.375 "dhgroup": "ffdhe8192",
00:13:52.375 "digest": "sha384",
00:13:52.375 "state": "completed"
00:13:52.375 },
00:13:52.375 "cntlid": 89,
00:13:52.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:52.375 "listen_address": {
00:13:52.375 "adrfam": "IPv4",
00:13:52.375 "traddr": "10.0.0.3",
00:13:52.375 "trsvcid": "4420",
00:13:52.375 "trtype": "TCP"
00:13:52.375 },
00:13:52.375 "peer_address": {
00:13:52.375 "adrfam": "IPv4",
00:13:52.375 "traddr": "10.0.0.1",
00:13:52.375 "trsvcid": "49146",
00:13:52.375 "trtype": "TCP"
00:13:52.375 },
00:13:52.375 "qid": 0,
00:13:52.375 "state": "enabled",
00:13:52.375 "thread": "nvmf_tgt_poll_group_000"
00:13:52.375 }
00:13:52.375 ]'
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:52.375 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:52.634 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:52.634 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:52.634 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:52.892 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:13:52.892 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=:
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:53.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:53.458 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:53.715 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:53.716 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:54.650
00:13:54.650 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:54.650 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:54.650 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:54.650 {
00:13:54.650 "auth": {
00:13:54.650 "dhgroup": "ffdhe8192",
00:13:54.650 "digest": "sha384",
00:13:54.650 "state": "completed"
00:13:54.650 },
00:13:54.650 "cntlid": 91,
00:13:54.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052",
00:13:54.650 "listen_address": {
00:13:54.650 "adrfam": "IPv4",
00:13:54.650 "traddr": "10.0.0.3",
00:13:54.650 "trsvcid": "4420",
00:13:54.650 "trtype": "TCP"
00:13:54.650 },
00:13:54.650 "peer_address": {
00:13:54.650 "adrfam": "IPv4",
00:13:54.650 "traddr": "10.0.0.1",
00:13:54.650 "trsvcid": "50756",
00:13:54.650 "trtype": "TCP"
00:13:54.650 },
00:13:54.650 "qid": 0,
00:13:54.650 "state": "enabled",
00:13:54.650 "thread": "nvmf_tgt_poll_group_000"
00:13:54.650 }
00:13:54.650 ]'
00:13:54.650 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:54.922 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:55.181 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:13:55.181 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==:
00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:56.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.116 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.117 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.117 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.117 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.117 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.051 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.052 { 00:13:57.052 "auth": { 00:13:57.052 "dhgroup": "ffdhe8192", 
00:13:57.052 "digest": "sha384", 00:13:57.052 "state": "completed" 00:13:57.052 }, 00:13:57.052 "cntlid": 93, 00:13:57.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:57.052 "listen_address": { 00:13:57.052 "adrfam": "IPv4", 00:13:57.052 "traddr": "10.0.0.3", 00:13:57.052 "trsvcid": "4420", 00:13:57.052 "trtype": "TCP" 00:13:57.052 }, 00:13:57.052 "peer_address": { 00:13:57.052 "adrfam": "IPv4", 00:13:57.052 "traddr": "10.0.0.1", 00:13:57.052 "trsvcid": "50778", 00:13:57.052 "trtype": "TCP" 00:13:57.052 }, 00:13:57.052 "qid": 0, 00:13:57.052 "state": "enabled", 00:13:57.052 "thread": "nvmf_tgt_poll_group_000" 00:13:57.052 } 00:13:57.052 ]' 00:13:57.052 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.310 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.569 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:57.569 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:13:58.136 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.395 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.331 00:13:59.331 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.331 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.331 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.589 { 00:13:59.589 "auth": { 00:13:59.589 "dhgroup": 
"ffdhe8192", 00:13:59.589 "digest": "sha384", 00:13:59.589 "state": "completed" 00:13:59.589 }, 00:13:59.589 "cntlid": 95, 00:13:59.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:13:59.589 "listen_address": { 00:13:59.589 "adrfam": "IPv4", 00:13:59.589 "traddr": "10.0.0.3", 00:13:59.589 "trsvcid": "4420", 00:13:59.589 "trtype": "TCP" 00:13:59.589 }, 00:13:59.589 "peer_address": { 00:13:59.589 "adrfam": "IPv4", 00:13:59.589 "traddr": "10.0.0.1", 00:13:59.589 "trsvcid": "50812", 00:13:59.589 "trtype": "TCP" 00:13:59.589 }, 00:13:59.589 "qid": 0, 00:13:59.589 "state": "enabled", 00:13:59.589 "thread": "nvmf_tgt_poll_group_000" 00:13:59.589 } 00:13:59.589 ]' 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.589 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.849 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:13:59.849 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.799 
11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:00.799 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.799 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.057 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.058 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.058 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.058 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.316 00:14:01.316 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.316 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.316 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.574 { 00:14:01.574 "auth": { 00:14:01.574 "dhgroup": "null", 00:14:01.574 "digest": "sha512", 00:14:01.574 "state": "completed" 00:14:01.574 }, 00:14:01.574 "cntlid": 97, 00:14:01.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:01.574 "listen_address": { 00:14:01.574 "adrfam": "IPv4", 00:14:01.574 "traddr": "10.0.0.3", 00:14:01.574 "trsvcid": "4420", 00:14:01.574 "trtype": "TCP" 00:14:01.574 }, 00:14:01.574 "peer_address": { 00:14:01.574 "adrfam": "IPv4", 00:14:01.574 "traddr": "10.0.0.1", 00:14:01.574 "trsvcid": "50838", 00:14:01.574 "trtype": "TCP" 00:14:01.574 }, 00:14:01.574 "qid": 0, 00:14:01.574 "state": "enabled", 00:14:01.574 "thread": "nvmf_tgt_poll_group_000" 00:14:01.574 } 00:14:01.574 ]' 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.574 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.832 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:01.832 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.832 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.832 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.832 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.090 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:02.090 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.656 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.222 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.481 00:14:03.481 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.481 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.481 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.740 11:34:09 
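Each qpairs block dumped in this trace is checked the same way: connect_authenticate pulls the qpair list for the subsystem and asserts that the negotiated auth parameters match what the current iteration requested (the jq / [[ ]] pairs at target/auth.sh@74-77). In sketch form, assuming rpc_cmd talks to the target-side RPC socket:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # the qpair must have finished authentication with the requested parameters
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]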
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.740 { 00:14:03.740 "auth": { 00:14:03.740 "dhgroup": "null", 00:14:03.740 "digest": "sha512", 00:14:03.740 "state": "completed" 00:14:03.740 }, 00:14:03.740 "cntlid": 99, 00:14:03.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:03.740 "listen_address": { 00:14:03.740 "adrfam": "IPv4", 00:14:03.740 "traddr": "10.0.0.3", 00:14:03.740 "trsvcid": "4420", 00:14:03.740 "trtype": "TCP" 00:14:03.740 }, 00:14:03.740 "peer_address": { 00:14:03.740 "adrfam": "IPv4", 00:14:03.740 "traddr": "10.0.0.1", 00:14:03.740 "trsvcid": "36048", 00:14:03.740 "trtype": "TCP" 00:14:03.740 }, 00:14:03.740 "qid": 0, 00:14:03.740 "state": "enabled", 00:14:03.740 "thread": "nvmf_tgt_poll_group_000" 00:14:03.740 } 00:14:03.740 ]' 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:03.740 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.998 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.998 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.998 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.256 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:04.256 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.822 11:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.822 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.080 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.338 00:14:05.338 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.338 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.338 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.910 { 00:14:05.910 "auth": { 00:14:05.910 "dhgroup": "null", 00:14:05.910 "digest": "sha512", 00:14:05.910 "state": "completed" 00:14:05.910 }, 00:14:05.910 "cntlid": 101, 00:14:05.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:05.910 "listen_address": { 00:14:05.910 "adrfam": "IPv4", 00:14:05.910 "traddr": "10.0.0.3", 00:14:05.910 "trsvcid": "4420", 00:14:05.910 "trtype": "TCP" 00:14:05.910 }, 00:14:05.910 "peer_address": { 00:14:05.910 "adrfam": "IPv4", 00:14:05.910 "traddr": "10.0.0.1", 00:14:05.910 "trsvcid": "36082", 00:14:05.910 "trtype": "TCP" 00:14:05.910 }, 00:14:05.910 "qid": 0, 00:14:05.910 "state": "enabled", 00:14:05.910 "thread": "nvmf_tgt_poll_group_000" 00:14:05.910 } 00:14:05.910 ]' 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:05.910 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.911 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.911 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.911 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.170 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:06.170 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
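The DHHC-1:NN: prefix on every secret in the trace records how the key material was transformed: 00 is a cleartext key, while 01, 02 and 03 mean the key was run through HMAC with SHA-256, SHA-384 or SHA-512 respectively, which is why host and controller keys with different prefixes appear side by side. With a recent nvme-cli a comparable secret could be generated roughly as follows (flag spellings from memory, so treat this as a sketch):

    # 48-byte random key, transformed with SHA-384 (hmac=2): prints a DHHC-1:02:...: string
    nvme gen-dhchap-key --key-length=48 --hmac=2 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052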
-- common/autotest_common.sh@10 -- # set +x 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.736 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.995 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.253 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.512 00:14:07.512 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.512 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.512 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.770 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.770 { 00:14:07.770 "auth": { 00:14:07.770 "dhgroup": "null", 00:14:07.770 "digest": "sha512", 00:14:07.770 "state": "completed" 00:14:07.770 }, 00:14:07.770 "cntlid": 103, 00:14:07.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:07.770 "listen_address": { 00:14:07.770 "adrfam": "IPv4", 00:14:07.770 "traddr": "10.0.0.3", 00:14:07.770 "trsvcid": "4420", 00:14:07.770 "trtype": "TCP" 00:14:07.770 }, 00:14:07.770 "peer_address": { 00:14:07.770 "adrfam": "IPv4", 00:14:07.770 "traddr": "10.0.0.1", 00:14:07.770 "trsvcid": "36114", 00:14:07.771 "trtype": "TCP" 00:14:07.771 }, 00:14:07.771 "qid": 0, 00:14:07.771 "state": "enabled", 00:14:07.771 "thread": "nvmf_tgt_poll_group_000" 00:14:07.771 } 00:14:07.771 ]' 00:14:07.771 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.771 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.771 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.771 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:07.771 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.029 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.029 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.029 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.288 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:08.288 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:08.855 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.114 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.373 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.373 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.373 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.373 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.632 00:14:09.632 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.632 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.632 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.892 
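Every hostrpc call in this trace expands (via target/auth.sh@31) to rpc.py pointed at /var/tmp/host.sock: the test drives two SPDK processes, the NVMe-oF target on the default RPC socket and a second, initiator-side app on its own socket. A plausible reconstruction of the wrapper, with rootdir standing in for the repo path:

    hostrpc() {
        # talk to the initiator-side SPDK app; rpc_cmd (default socket) talks to the target
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }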
11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.892 { 00:14:09.892 "auth": { 00:14:09.892 "dhgroup": "ffdhe2048", 00:14:09.892 "digest": "sha512", 00:14:09.892 "state": "completed" 00:14:09.892 }, 00:14:09.892 "cntlid": 105, 00:14:09.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:09.892 "listen_address": { 00:14:09.892 "adrfam": "IPv4", 00:14:09.892 "traddr": "10.0.0.3", 00:14:09.892 "trsvcid": "4420", 00:14:09.892 "trtype": "TCP" 00:14:09.892 }, 00:14:09.892 "peer_address": { 00:14:09.892 "adrfam": "IPv4", 00:14:09.892 "traddr": "10.0.0.1", 00:14:09.892 "trsvcid": "36134", 00:14:09.892 "trtype": "TCP" 00:14:09.892 }, 00:14:09.892 "qid": 0, 00:14:09.892 "state": "enabled", 00:14:09.892 "thread": "nvmf_tgt_poll_group_000" 00:14:09.892 } 00:14:09.892 ]' 00:14:09.892 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.153 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.411 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:10.411 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:10.978 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:11.238 11:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.238 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.497 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.755 00:14:11.755 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.755 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.755 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.013 { 00:14:12.013 "auth": { 00:14:12.013 "dhgroup": "ffdhe2048", 00:14:12.013 "digest": "sha512", 00:14:12.013 "state": "completed" 00:14:12.013 }, 00:14:12.013 "cntlid": 107, 00:14:12.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:12.013 "listen_address": { 00:14:12.013 "adrfam": "IPv4", 00:14:12.013 "traddr": "10.0.0.3", 00:14:12.013 "trsvcid": "4420", 00:14:12.013 "trtype": "TCP" 00:14:12.013 }, 00:14:12.013 "peer_address": { 00:14:12.013 "adrfam": "IPv4", 00:14:12.013 "traddr": "10.0.0.1", 00:14:12.013 "trsvcid": "36164", 00:14:12.013 "trtype": "TCP" 00:14:12.013 }, 00:14:12.013 "qid": 0, 00:14:12.013 "state": "enabled", 00:14:12.013 "thread": "nvmf_tgt_poll_group_000" 00:14:12.013 } 00:14:12.013 ]' 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.013 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.272 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:12.272 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.272 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.272 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.272 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.531 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:12.531 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:13.098 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.357 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.924 00:14:13.924 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.924 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.924 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.183 { 00:14:14.183 "auth": { 00:14:14.183 "dhgroup": "ffdhe2048", 00:14:14.183 "digest": "sha512", 00:14:14.183 "state": "completed" 00:14:14.183 }, 00:14:14.183 "cntlid": 109, 00:14:14.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:14.183 "listen_address": { 00:14:14.183 "adrfam": "IPv4", 00:14:14.183 "traddr": "10.0.0.3", 00:14:14.183 "trsvcid": "4420", 00:14:14.183 "trtype": "TCP" 00:14:14.183 }, 00:14:14.183 "peer_address": { 00:14:14.183 "adrfam": "IPv4", 00:14:14.183 "traddr": "10.0.0.1", 00:14:14.183 "trsvcid": "35250", 00:14:14.183 "trtype": "TCP" 00:14:14.183 }, 00:14:14.183 "qid": 0, 00:14:14.183 "state": "enabled", 00:14:14.183 "thread": "nvmf_tgt_poll_group_000" 00:14:14.183 } 00:14:14.183 ]' 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.183 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.751 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:14.751 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.318 11:34:20 
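After the userspace path succeeds, the same key pair is exercised through the kernel initiator: nvme_connect wraps nvme-cli, handing it the host secret and, when bidirectional auth is under test, the controller secret as well. Trimmed to its essentials, with the secret bodies elided:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --hostid "$hostid" -i 1 -l 0 \
        --dhchap-secret "DHHC-1:02:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:01:<controller secret>"
    # torn down again once the authenticated admin queue is up
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0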
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:15.318 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.577 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.144 00:14:16.144 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.144 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.144 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.409 { 00:14:16.409 "auth": { 00:14:16.409 "dhgroup": "ffdhe2048", 00:14:16.409 "digest": "sha512", 00:14:16.409 "state": "completed" 00:14:16.409 }, 00:14:16.409 "cntlid": 111, 00:14:16.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:16.409 "listen_address": { 00:14:16.409 "adrfam": "IPv4", 00:14:16.409 "traddr": "10.0.0.3", 00:14:16.409 "trsvcid": "4420", 00:14:16.409 "trtype": "TCP" 00:14:16.409 }, 00:14:16.409 "peer_address": { 00:14:16.409 "adrfam": "IPv4", 00:14:16.409 "traddr": "10.0.0.1", 00:14:16.409 "trsvcid": "35270", 00:14:16.409 "trtype": "TCP" 00:14:16.409 }, 00:14:16.409 "qid": 0, 00:14:16.409 "state": "enabled", 00:14:16.409 "thread": "nvmf_tgt_poll_group_000" 00:14:16.409 } 00:14:16.409 ]' 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.409 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.678 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:16.678 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.614 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.615 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.182 00:14:18.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
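The trace above repeats one fixed pattern per digest/dhgroup/key combination: pin the host driver to a single sha512 + ffdhe* pairing, register the host NQN on the subsystem with the key under test, authenticate once through the SPDK bdev path and once through nvme-cli, confirm the qpair reports the negotiated parameters, and tear everything down. Condensed into plain shell (a sketch reconstructed from the commands visible in this trace; the DHHC-1 secrets are placeholders, every other value is the one used throughout the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host-side driver to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Register the host on the target; --dhchap-ctrlr-key requests bidirectional
# authentication (the key3 iterations omit it because no ckey3 is configured).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Authenticate via the SPDK host stack, verify the qpair, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'             # digest, dhgroup, state "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, passing the secrets inline
# (placeholder secrets here; the log shows the real base64 DHHC-1 strings).
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 \
    --dhchap-secret 'DHHC-1:01:<key1>' --dhchap-ctrl-secret 'DHHC-1:02:<ckey1>'
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each successful iteration shows up in the trace as the three jq assertions (digest, dhgroup, auth state) passing before the detach, followed by the nvme-cli connect/disconnect pair against the same subsystem.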
00:14:18.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.441 { 00:14:18.441 "auth": { 00:14:18.441 "dhgroup": "ffdhe3072", 00:14:18.441 "digest": "sha512", 00:14:18.441 "state": "completed" 00:14:18.441 }, 00:14:18.441 "cntlid": 113, 00:14:18.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:18.441 "listen_address": { 00:14:18.441 "adrfam": "IPv4", 00:14:18.441 "traddr": "10.0.0.3", 00:14:18.441 "trsvcid": "4420", 00:14:18.441 "trtype": "TCP" 00:14:18.441 }, 00:14:18.441 "peer_address": { 00:14:18.441 "adrfam": "IPv4", 00:14:18.441 "traddr": "10.0.0.1", 00:14:18.441 "trsvcid": "35296", 00:14:18.441 "trtype": "TCP" 00:14:18.441 }, 00:14:18.441 "qid": 0, 00:14:18.441 "state": "enabled", 00:14:18.441 "thread": "nvmf_tgt_poll_group_000" 00:14:18.441 } 00:14:18.441 ]' 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.441 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.699 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:18.699 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret 
DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:19.636 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.895 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.153 00:14:20.153 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.153 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.153 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.412 { 00:14:20.412 "auth": { 00:14:20.412 "dhgroup": "ffdhe3072", 00:14:20.412 "digest": "sha512", 00:14:20.412 "state": "completed" 00:14:20.412 }, 00:14:20.412 "cntlid": 115, 00:14:20.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:20.412 "listen_address": { 00:14:20.412 "adrfam": "IPv4", 00:14:20.412 "traddr": "10.0.0.3", 00:14:20.412 "trsvcid": "4420", 00:14:20.412 "trtype": "TCP" 00:14:20.412 }, 00:14:20.412 "peer_address": { 00:14:20.412 "adrfam": "IPv4", 00:14:20.412 "traddr": "10.0.0.1", 00:14:20.412 "trsvcid": "35314", 00:14:20.412 "trtype": "TCP" 00:14:20.412 }, 00:14:20.412 "qid": 0, 00:14:20.412 "state": "enabled", 00:14:20.412 "thread": "nvmf_tgt_poll_group_000" 00:14:20.412 } 00:14:20.412 ]' 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.412 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.671 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.671 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.671 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.671 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.671 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.929 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:20.929 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 
503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:21.559 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.126 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.384 00:14:22.384 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.384 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.384 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.642 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.642 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.642 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.642 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.642 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.642 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.642 { 00:14:22.642 "auth": { 00:14:22.642 "dhgroup": "ffdhe3072", 00:14:22.642 "digest": "sha512", 00:14:22.642 "state": "completed" 00:14:22.642 }, 00:14:22.642 "cntlid": 117, 00:14:22.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:22.642 "listen_address": { 00:14:22.642 "adrfam": "IPv4", 00:14:22.642 "traddr": "10.0.0.3", 00:14:22.642 "trsvcid": "4420", 00:14:22.642 "trtype": "TCP" 00:14:22.642 }, 00:14:22.642 "peer_address": { 00:14:22.642 "adrfam": "IPv4", 00:14:22.642 "traddr": "10.0.0.1", 00:14:22.642 "trsvcid": "35340", 00:14:22.642 "trtype": "TCP" 00:14:22.642 }, 00:14:22.642 "qid": 0, 00:14:22.642 "state": "enabled", 00:14:22.642 "thread": "nvmf_tgt_poll_group_000" 00:14:22.642 } 00:14:22.642 ]' 00:14:22.642 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.901 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.159 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:23.159 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.092 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.658 00:14:24.658 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.658 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.658 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.916 { 00:14:24.916 "auth": { 00:14:24.916 "dhgroup": "ffdhe3072", 00:14:24.916 "digest": "sha512", 00:14:24.916 "state": "completed" 00:14:24.916 }, 00:14:24.916 "cntlid": 119, 00:14:24.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:24.916 "listen_address": { 00:14:24.916 "adrfam": "IPv4", 00:14:24.916 "traddr": "10.0.0.3", 00:14:24.916 "trsvcid": "4420", 00:14:24.916 "trtype": "TCP" 00:14:24.916 }, 00:14:24.916 "peer_address": { 00:14:24.916 "adrfam": "IPv4", 00:14:24.916 "traddr": "10.0.0.1", 00:14:24.916 "trsvcid": "50054", 00:14:24.916 "trtype": "TCP" 00:14:24.916 }, 00:14:24.916 "qid": 0, 00:14:24.916 "state": "enabled", 00:14:24.916 "thread": "nvmf_tgt_poll_group_000" 00:14:24.916 } 00:14:24.916 ]' 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.916 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.917 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.917 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.482 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:25.482 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:26.049 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.308 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.875 00:14:26.875 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.875 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.875 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.133 { 00:14:27.133 "auth": { 00:14:27.133 "dhgroup": "ffdhe4096", 00:14:27.133 "digest": "sha512", 00:14:27.133 "state": "completed" 00:14:27.133 }, 00:14:27.133 "cntlid": 121, 00:14:27.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:27.133 "listen_address": { 00:14:27.133 "adrfam": "IPv4", 00:14:27.133 "traddr": "10.0.0.3", 00:14:27.133 "trsvcid": "4420", 00:14:27.133 "trtype": "TCP" 00:14:27.133 }, 00:14:27.133 "peer_address": { 00:14:27.133 "adrfam": "IPv4", 00:14:27.133 "traddr": "10.0.0.1", 00:14:27.133 "trsvcid": "50094", 00:14:27.133 "trtype": "TCP" 00:14:27.133 }, 00:14:27.133 "qid": 0, 00:14:27.133 "state": "enabled", 00:14:27.133 "thread": "nvmf_tgt_poll_group_000" 00:14:27.133 } 00:14:27.133 ]' 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.133 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.392 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret 
DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:27.392 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.327 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.893 00:14:28.893 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.893 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.893 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.151 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.151 { 00:14:29.151 "auth": { 00:14:29.151 "dhgroup": "ffdhe4096", 00:14:29.151 "digest": "sha512", 00:14:29.151 "state": "completed" 00:14:29.151 }, 00:14:29.151 "cntlid": 123, 00:14:29.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:29.151 "listen_address": { 00:14:29.151 "adrfam": "IPv4", 00:14:29.151 "traddr": "10.0.0.3", 00:14:29.152 "trsvcid": "4420", 00:14:29.152 "trtype": "TCP" 00:14:29.152 }, 00:14:29.152 "peer_address": { 00:14:29.152 "adrfam": "IPv4", 00:14:29.152 "traddr": "10.0.0.1", 00:14:29.152 "trsvcid": "50118", 00:14:29.152 "trtype": "TCP" 00:14:29.152 }, 00:14:29.152 "qid": 0, 00:14:29.152 "state": "enabled", 00:14:29.152 "thread": "nvmf_tgt_poll_group_000" 00:14:29.152 } 00:14:29.152 ]' 00:14:29.152 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.152 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.152 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.152 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.152 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.410 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.410 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.410 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.669 11:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:29.669 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:30.235 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.494 11:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.058 00:14:31.058 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.058 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.058 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.336 { 00:14:31.336 "auth": { 00:14:31.336 "dhgroup": "ffdhe4096", 00:14:31.336 "digest": "sha512", 00:14:31.336 "state": "completed" 00:14:31.336 }, 00:14:31.336 "cntlid": 125, 00:14:31.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:31.336 "listen_address": { 00:14:31.336 "adrfam": "IPv4", 00:14:31.336 "traddr": "10.0.0.3", 00:14:31.336 "trsvcid": "4420", 00:14:31.336 "trtype": "TCP" 00:14:31.336 }, 00:14:31.336 "peer_address": { 00:14:31.336 "adrfam": "IPv4", 00:14:31.336 "traddr": "10.0.0.1", 00:14:31.336 "trsvcid": "50148", 00:14:31.336 "trtype": "TCP" 00:14:31.336 }, 00:14:31.336 "qid": 0, 00:14:31.336 "state": "enabled", 00:14:31.336 "thread": "nvmf_tgt_poll_group_000" 00:14:31.336 } 00:14:31.336 ]' 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.336 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.923 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:31.923 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.490 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:32.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.749 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.315 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.315 { 00:14:33.315 "auth": { 00:14:33.315 "dhgroup": "ffdhe4096", 00:14:33.315 "digest": "sha512", 00:14:33.315 "state": "completed" 00:14:33.315 }, 00:14:33.315 "cntlid": 127, 00:14:33.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:33.315 "listen_address": { 00:14:33.315 "adrfam": "IPv4", 00:14:33.315 "traddr": "10.0.0.3", 00:14:33.315 "trsvcid": "4420", 00:14:33.315 "trtype": "TCP" 00:14:33.315 }, 00:14:33.315 "peer_address": { 00:14:33.315 "adrfam": "IPv4", 00:14:33.315 "traddr": "10.0.0.1", 00:14:33.315 "trsvcid": "35298", 00:14:33.315 "trtype": "TCP" 00:14:33.315 }, 00:14:33.315 "qid": 0, 00:14:33.315 "state": "enabled", 00:14:33.315 "thread": "nvmf_tgt_poll_group_000" 00:14:33.315 } 00:14:33.315 ]' 00:14:33.315 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.574 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.832 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:33.832 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:34.399 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.657 11:34:40 
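[Annotation] The ffdhe6144 pass starting at @121 repeats the same three-step setup for every key id. Condensed, with the NQNs from this run factored into variables (the variable names are illustrative, not the script's own):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
  # 1) Pin the host to a single digest/dhgroup combination.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # 2) Authorize the host NQN on the target with the key pair under test.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3) Attach a controller through the host app; DH-HMAC-CHAP runs here.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0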
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.657 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.224 00:14:35.224 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.224 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.224 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.482 { 00:14:35.482 "auth": { 00:14:35.482 "dhgroup": "ffdhe6144", 00:14:35.482 "digest": "sha512", 00:14:35.482 "state": "completed" 00:14:35.482 }, 00:14:35.482 "cntlid": 129, 00:14:35.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:35.482 "listen_address": { 00:14:35.482 "adrfam": "IPv4", 00:14:35.482 "traddr": "10.0.0.3", 00:14:35.482 "trsvcid": "4420", 00:14:35.482 "trtype": "TCP" 00:14:35.482 }, 00:14:35.482 "peer_address": { 00:14:35.482 "adrfam": "IPv4", 00:14:35.482 "traddr": "10.0.0.1", 00:14:35.482 "trsvcid": "35326", 00:14:35.482 "trtype": "TCP" 00:14:35.482 }, 00:14:35.482 "qid": 0, 00:14:35.482 "state": "enabled", 00:14:35.482 "thread": "nvmf_tgt_poll_group_000" 00:14:35.482 } 00:14:35.482 ]' 00:14:35.482 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.741 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.741 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.741 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.999 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:35.999 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.566 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 11:34:42 
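[Annotation] The @80 nvme connect in the middle of this span exercises the same key through the kernel initiator instead of the SPDK host app; nvme-cli takes the DHHC-1 secret strings directly on the command line. Shape of the call, with $subnqn/$hostnqn as in the earlier sketch and the secret variables standing in for the literal strings printed above:

  # -i 1: one I/O queue; -l 0: ctrl-loss-tmo of 0, so a failed auth fails fast.
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"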
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.137 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.138 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.403 00:14:37.403 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.403 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.403 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.661 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.661 { 00:14:37.661 "auth": { 00:14:37.661 "dhgroup": "ffdhe6144", 00:14:37.661 "digest": "sha512", 00:14:37.661 "state": "completed" 00:14:37.661 }, 00:14:37.661 "cntlid": 131, 00:14:37.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:37.661 "listen_address": { 00:14:37.661 "adrfam": "IPv4", 00:14:37.661 "traddr": "10.0.0.3", 00:14:37.661 "trsvcid": "4420", 00:14:37.661 "trtype": "TCP" 00:14:37.661 }, 00:14:37.661 "peer_address": { 00:14:37.661 "adrfam": "IPv4", 00:14:37.661 "traddr": "10.0.0.1", 00:14:37.661 "trsvcid": "35352", 00:14:37.661 "trtype": "TCP" 00:14:37.661 }, 00:14:37.661 "qid": 0, 00:14:37.661 "state": "enabled", 00:14:37.661 "thread": "nvmf_tgt_poll_group_000" 00:14:37.661 } 00:14:37.661 ]' 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.661 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.919 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.919 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
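[Annotation] Every hostrpc frame in this trace expands to scripts/rpc.py pointed at the host application's socket rather than the target's. The helper is effectively a one-liner; a sketch assuming the paths seen throughout this run:

  hostrpc() {
      # /var/tmp/host.sock is presumably the RPC socket the host-side
      # bdev app was started with; the target uses the default socket.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }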
-r '.[0].auth.state' 00:14:37.919 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.919 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.919 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.178 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:38.178 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.744 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.310 11:34:44 
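[Annotation] The ckey assignment at @68, visible before every add_host above, is what makes the controller key optional: bash's ${var:+word} expansion yields nothing when ckeys[$3] is unset or empty, so unidirectional keys (key3 in this run, whose add_host carries no --dhchap-ctrlr-key) drop both arguments entirely instead of passing them empty. In isolation:

  # $keyid stands in for the function's $3; with ckeys[keyid] empty, the
  # array is empty and "${ckey[@]}" expands to no words at all.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"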
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.310 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.568 00:14:39.568 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.568 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.568 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.140 { 00:14:40.140 "auth": { 00:14:40.140 "dhgroup": "ffdhe6144", 00:14:40.140 "digest": "sha512", 00:14:40.140 "state": "completed" 00:14:40.140 }, 00:14:40.140 "cntlid": 133, 00:14:40.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:40.140 "listen_address": { 00:14:40.140 "adrfam": "IPv4", 00:14:40.140 "traddr": "10.0.0.3", 00:14:40.140 "trsvcid": "4420", 00:14:40.140 "trtype": "TCP" 00:14:40.140 }, 00:14:40.140 "peer_address": { 00:14:40.140 "adrfam": "IPv4", 00:14:40.140 "traddr": "10.0.0.1", 00:14:40.140 "trsvcid": "35396", 00:14:40.140 "trtype": "TCP" 00:14:40.140 }, 00:14:40.140 "qid": 0, 00:14:40.140 "state": "enabled", 00:14:40.140 "thread": "nvmf_tgt_poll_group_000" 00:14:40.140 } 00:14:40.140 ]' 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.140 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.399 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:40.399 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:41.335 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:41.593 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
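[Annotation] The DHHC-1 strings above follow the NVMe in-band authentication (TP 8006) secret representation, DHHC-1:<t>:<base64 key material>:, where <t> records how the secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the host and controller secrets of one pair can carry different prefixes. A hypothetical way to mint such a string, assuming a recent nvme-cli with gen-dhchap-key:

  # 32-byte secret for this host NQN, transformed with SHA-512 (-m 3);
  # the output has the form DHHC-1:03:...=:
  nvme gen-dhchap-key -m 3 -l 32 -n "$hostnqn"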
nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.594 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.160 00:14:42.160 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.160 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.160 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.418 { 00:14:42.418 "auth": { 00:14:42.418 "dhgroup": "ffdhe6144", 00:14:42.418 "digest": "sha512", 00:14:42.418 "state": "completed" 00:14:42.418 }, 00:14:42.418 "cntlid": 135, 00:14:42.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:42.418 "listen_address": { 00:14:42.418 "adrfam": "IPv4", 00:14:42.418 "traddr": "10.0.0.3", 00:14:42.418 "trsvcid": "4420", 00:14:42.418 "trtype": "TCP" 00:14:42.418 }, 00:14:42.418 "peer_address": { 00:14:42.418 "adrfam": "IPv4", 00:14:42.418 "traddr": "10.0.0.1", 00:14:42.418 "trsvcid": "35418", 00:14:42.418 "trtype": "TCP" 00:14:42.418 }, 00:14:42.418 "qid": 0, 00:14:42.418 "state": "enabled", 00:14:42.418 "thread": "nvmf_tgt_poll_group_000" 00:14:42.418 } 00:14:42.418 ]' 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.418 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.688 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:42.688 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:43.641 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
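[Annotation] The @119/@120 frames that appear whenever the group changes (here, moving on to ffdhe8192) are the drivers of this whole section: an outer walk over the dhgroups array and an inner walk over the key ids, one connect_authenticate per combination. Roughly, for the sha512 pass shown here:

  for dhgroup in "${dhgroups[@]}"; do      # ... ffdhe4096, ffdhe6144, ffdhe8192
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done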
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.899 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.466 00:14:44.466 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.466 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.466 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.724 { 00:14:44.724 "auth": { 00:14:44.724 "dhgroup": "ffdhe8192", 00:14:44.724 "digest": "sha512", 00:14:44.724 "state": "completed" 00:14:44.724 }, 00:14:44.724 "cntlid": 137, 00:14:44.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:44.724 "listen_address": { 00:14:44.724 "adrfam": "IPv4", 00:14:44.724 "traddr": "10.0.0.3", 00:14:44.724 "trsvcid": "4420", 00:14:44.724 "trtype": "TCP" 00:14:44.724 }, 00:14:44.724 "peer_address": { 00:14:44.724 "adrfam": "IPv4", 00:14:44.724 "traddr": "10.0.0.1", 00:14:44.724 "trsvcid": "42128", 00:14:44.724 "trtype": "TCP" 00:14:44.724 }, 00:14:44.724 "qid": 0, 00:14:44.724 "state": "enabled", 00:14:44.724 "thread": "nvmf_tgt_poll_group_000" 00:14:44.724 } 00:14:44.724 ]' 00:14:44.724 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.982 11:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.982 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.241 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:45.241 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:45.808 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.376 11:34:51 
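[Annotation] One detail repeated in every iteration: after attaching, @73 does not trust the attach RPC's exit code alone; it lists controllers over the host socket and string-compares the name (the \n\v\m\e\0 pattern is just the bash-quoted form of nvme0 inside [[ ]]). Equivalent check:

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]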
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.376 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.943 00:14:46.943 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.943 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.943 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.202 { 00:14:47.202 "auth": { 00:14:47.202 "dhgroup": "ffdhe8192", 00:14:47.202 "digest": "sha512", 00:14:47.202 "state": "completed" 00:14:47.202 }, 00:14:47.202 "cntlid": 139, 00:14:47.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:47.202 "listen_address": { 00:14:47.202 "adrfam": "IPv4", 00:14:47.202 "traddr": "10.0.0.3", 00:14:47.202 "trsvcid": "4420", 00:14:47.202 "trtype": "TCP" 00:14:47.202 }, 00:14:47.202 "peer_address": { 00:14:47.202 "adrfam": "IPv4", 00:14:47.202 "traddr": "10.0.0.1", 00:14:47.202 "trsvcid": "42138", 00:14:47.202 "trtype": "TCP" 00:14:47.202 }, 00:14:47.202 "qid": 0, 00:14:47.202 "state": "enabled", 00:14:47.202 "thread": "nvmf_tgt_poll_group_000" 00:14:47.202 } 00:14:47.202 ]' 00:14:47.202 11:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.202 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.461 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:47.461 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: --dhchap-ctrl-secret DHHC-1:02:YTUxOWZiNGFjNWU4MGE3MTBhNDczYjdhM2M2NmVmYmQyNGI1NzY1NDYyOTg1YjBikuuk6g==: 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
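[Annotation] Teardown is symmetric with setup and runs between all iterations so no auth state leaks into the next case: @78 detaches the host-app controller, @80/@82 do one nvme-cli connect/disconnect round trip with the raw secrets, and @83 removes the host from the subsystem. The cleanup half, condensed:

  hostrpc bdev_nvme_detach_controller nvme0
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"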
dhgroup=ffdhe8192 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.406 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.407 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.407 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.407 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.407 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.407 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.665 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.665 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.231 00:14:49.231 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.231 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.231 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.492 { 00:14:49.492 "auth": { 00:14:49.492 "dhgroup": "ffdhe8192", 00:14:49.492 "digest": "sha512", 00:14:49.492 "state": "completed" 00:14:49.492 }, 00:14:49.492 "cntlid": 141, 00:14:49.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:49.492 "listen_address": { 00:14:49.492 "adrfam": "IPv4", 00:14:49.492 "traddr": "10.0.0.3", 00:14:49.492 "trsvcid": "4420", 00:14:49.492 "trtype": "TCP" 00:14:49.492 }, 00:14:49.492 "peer_address": { 00:14:49.492 "adrfam": "IPv4", 00:14:49.492 "traddr": "10.0.0.1", 00:14:49.492 "trsvcid": "42180", 00:14:49.492 "trtype": "TCP" 00:14:49.492 }, 00:14:49.492 "qid": 0, 00:14:49.492 "state": 
"enabled", 00:14:49.492 "thread": "nvmf_tgt_poll_group_000" 00:14:49.492 } 00:14:49.492 ]' 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.492 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.755 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.755 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.755 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.012 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:50.012 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:01:ZmM3MjgzODAxZDlmOWUyODlhMzZkYjRkYjM3YzEwZmYhg7Vv: 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:50.577 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.835 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.093 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.093 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.658 00:14:51.658 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.659 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.659 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.917 { 00:14:51.917 "auth": { 00:14:51.917 "dhgroup": "ffdhe8192", 00:14:51.917 "digest": "sha512", 00:14:51.917 "state": "completed" 00:14:51.917 }, 00:14:51.917 "cntlid": 143, 00:14:51.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:51.917 "listen_address": { 00:14:51.917 "adrfam": "IPv4", 00:14:51.917 "traddr": "10.0.0.3", 00:14:51.917 "trsvcid": "4420", 00:14:51.917 "trtype": "TCP" 00:14:51.917 }, 00:14:51.917 "peer_address": { 00:14:51.917 "adrfam": "IPv4", 00:14:51.917 "traddr": "10.0.0.1", 00:14:51.917 "trsvcid": "42208", 00:14:51.917 "trtype": "TCP" 00:14:51.917 }, 00:14:51.917 "qid": 0, 00:14:51.917 
"state": "enabled", 00:14:51.917 "thread": "nvmf_tgt_poll_group_000" 00:14:51.917 } 00:14:51.917 ]' 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.917 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.175 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.175 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.175 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.433 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:52.433 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.998 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.254 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.511 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.511 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.077 00:14:54.077 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.077 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.077 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.336 { 00:14:54.336 "auth": { 00:14:54.336 "dhgroup": "ffdhe8192", 00:14:54.336 "digest": "sha512", 00:14:54.336 "state": "completed" 00:14:54.336 }, 00:14:54.336 
"cntlid": 145, 00:14:54.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:54.336 "listen_address": { 00:14:54.336 "adrfam": "IPv4", 00:14:54.336 "traddr": "10.0.0.3", 00:14:54.336 "trsvcid": "4420", 00:14:54.336 "trtype": "TCP" 00:14:54.336 }, 00:14:54.336 "peer_address": { 00:14:54.336 "adrfam": "IPv4", 00:14:54.336 "traddr": "10.0.0.1", 00:14:54.336 "trsvcid": "54682", 00:14:54.336 "trtype": "TCP" 00:14:54.336 }, 00:14:54.336 "qid": 0, 00:14:54.336 "state": "enabled", 00:14:54.336 "thread": "nvmf_tgt_poll_group_000" 00:14:54.336 } 00:14:54.336 ]' 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.336 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.901 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:54.901 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:00:NDk1NGY1N2MwYWYyNGVjZDA2ZTg3NWYzZjg3Yzk1YWNmM2M1M2FmNTNjODkwYTRkCb88Ww==: --dhchap-ctrl-secret DHHC-1:03:NTE0MmUyNmRjZTExZGQ2NjBkY2NiNDk3ZDhjYjQ3ZDJkOTI0MTQwYTVlODJkYjIwNDYwNTg4NmI3Yjc5ODk1MFfzHmo=: 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 00:14:55.467 11:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:55.467 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:55.468 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:56.034 2024/12/16 11:35:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:56.034 request: 00:14:56.034 { 00:14:56.034 "method": "bdev_nvme_attach_controller", 00:14:56.034 "params": { 00:14:56.034 "name": "nvme0", 00:14:56.034 "trtype": "tcp", 00:14:56.034 "traddr": "10.0.0.3", 00:14:56.034 "adrfam": "ipv4", 00:14:56.034 "trsvcid": "4420", 00:14:56.034 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:56.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:56.034 "prchk_reftag": false, 00:14:56.034 "prchk_guard": false, 00:14:56.034 "hdgst": false, 00:14:56.034 "ddgst": false, 00:14:56.034 "dhchap_key": "key2", 00:14:56.034 "allow_unrecognized_csi": false 00:14:56.034 } 00:14:56.034 } 00:14:56.034 Got JSON-RPC error response 00:14:56.034 GoRPCClient: error on JSON-RPC call 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
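The failure above is the expected outcome of the first negative test: the host entry on the subsystem was created with --dhchap-key key1 only, so an initiator presenting key2 cannot complete the DH-HMAC-CHAP handshake, and bdev_nvme_attach_controller surfaces this as a JSON-RPC error (Code=-5, Input/output error), which the NOT wrapper asserts. Distilled to standalone commands, a minimal sketch assuming the sockets and paths used in this run (target side on the default RPC socket, host side on /var/tmp/host.sock):

    # Target side: allow only key1 for this host.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
        --dhchap-key key1
    # Host side: attaching with key2 is expected to fail with -EIO.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2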
00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.034 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.035 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.601 2024/12/16 11:35:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:56.601 request: 00:14:56.601 { 00:14:56.601 "method": "bdev_nvme_attach_controller", 00:14:56.601 "params": { 00:14:56.601 "name": "nvme0", 00:14:56.601 "trtype": "tcp", 00:14:56.601 "traddr": "10.0.0.3", 00:14:56.601 "adrfam": "ipv4", 00:14:56.601 "trsvcid": "4420", 00:14:56.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:56.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:56.601 "prchk_reftag": false, 00:14:56.601 "prchk_guard": false, 00:14:56.601 "hdgst": false, 00:14:56.601 "ddgst": false, 00:14:56.601 "dhchap_key": "key1", 00:14:56.601 "dhchap_ctrlr_key": "ckey2", 00:14:56.601 "allow_unrecognized_csi": false 00:14:56.601 } 00:14:56.601 } 00:14:56.601 Got JSON-RPC error response 00:14:56.601 GoRPCClient: error on JSON-RPC call 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.601 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.543 2024/12/16 11:35:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:57.543 request: 00:14:57.543 { 00:14:57.543 "method": "bdev_nvme_attach_controller", 00:14:57.543 "params": { 00:14:57.543 "name": "nvme0", 00:14:57.543 "trtype": "tcp", 00:14:57.543 "traddr": "10.0.0.3", 00:14:57.543 "adrfam": "ipv4", 00:14:57.543 "trsvcid": "4420", 00:14:57.543 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:57.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:57.543 "prchk_reftag": false, 00:14:57.543 "prchk_guard": false, 00:14:57.543 "hdgst": false, 00:14:57.543 "ddgst": false, 00:14:57.543 "dhchap_key": "key1", 00:14:57.543 "dhchap_ctrlr_key": "ckey1", 00:14:57.543 "allow_unrecognized_csi": false 00:14:57.543 } 00:14:57.543 } 00:14:57.543 Got JSON-RPC error response 00:14:57.543 GoRPCClient: error on JSON-RPC call 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 78262 00:14:57.543 11:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 78262 ']' 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 78262 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78262 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78262' 00:14:57.543 killing process with pid 78262 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 78262 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 78262 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83209 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83209 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83209 ']' 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
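At this point the test tears down the first target (pid 78262) and starts a fresh one with --wait-for-rpc, so that subsystem initialization does not complete until the test has configured it over RPC (here: loading DHHC-1 keys into the keyring), and with the nvmf_auth debug log component enabled. The exact invocation from this run:

    # Fresh target in the test's network namespace; initialization waits
    # for an explicit framework_start_init RPC (--wait-for-rpc), and the
    # nvmf_auth debug log flag is enabled (-L nvmf_auth).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth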
00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.543 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83209 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83209 ']' 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
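With the new target up and waiting, the rpc_cmd batch that follows registers each generated DHHC-1 key file under a named keyring entry, so later RPCs can refer to the keys as key0..key3 / ckey0..ckey2 rather than by file path. Equivalent standalone calls would look roughly like this (file names are the ones generated earlier in this run; the script issues them as one batched rpc_cmd on the default socket):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.lS6
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wCF
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.P3L
    # ...and likewise for ckey1, key2, ckey2, and key3.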
00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.110 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.368 null0 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lS6 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wCF ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wCF 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.P3L 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.dW9 ]] 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dW9 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.368 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.627 11:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FkJ 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.8BS ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8BS 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rhy 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
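The attach that follows exercises the same connect_authenticate helper as before, now with the keyring-backed key3. On success the helper verifies on the target side that the established qpair actually negotiated the expected parameters. The script captures the qpair JSON into a variable and runs jq on it; piped directly, the equivalent checks are (rpc_cmd is the suite's helper for the default target RPC socket):

    # Each assertion pulls one field out of the qpair's auth object.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect: sha512
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect: ffdhe8192
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect: completed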
00:14:58.627 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.562 nvme0n1 00:14:59.562 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.562 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.562 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.827 { 00:14:59.827 "auth": { 00:14:59.827 "dhgroup": "ffdhe8192", 00:14:59.827 "digest": "sha512", 00:14:59.827 "state": "completed" 00:14:59.827 }, 00:14:59.827 "cntlid": 1, 00:14:59.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:14:59.827 "listen_address": { 00:14:59.827 "adrfam": "IPv4", 00:14:59.827 "traddr": "10.0.0.3", 00:14:59.827 "trsvcid": "4420", 00:14:59.827 "trtype": "TCP" 00:14:59.827 }, 00:14:59.827 "peer_address": { 00:14:59.827 "adrfam": "IPv4", 00:14:59.827 "traddr": "10.0.0.1", 00:14:59.827 "trsvcid": "54740", 00:14:59.827 "trtype": "TCP" 00:14:59.827 }, 00:14:59.827 "qid": 0, 00:14:59.827 "state": "enabled", 00:14:59.827 "thread": "nvmf_tgt_poll_group_000" 00:14:59.827 } 00:14:59.827 ]' 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.827 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.085 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.085 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.085 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.085 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.085 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.343 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:15:00.343 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:15:00.909 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key3 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:01.167 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.426 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.684 2024/12/16 11:35:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:01.684 request: 00:15:01.684 { 00:15:01.684 "method": "bdev_nvme_attach_controller", 00:15:01.684 "params": { 00:15:01.684 "name": "nvme0", 00:15:01.684 "trtype": "tcp", 00:15:01.684 "traddr": "10.0.0.3", 00:15:01.684 "adrfam": "ipv4", 00:15:01.684 "trsvcid": "4420", 00:15:01.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:01.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:15:01.684 "prchk_reftag": false, 00:15:01.684 "prchk_guard": false, 00:15:01.684 "hdgst": false, 00:15:01.684 "ddgst": false, 00:15:01.684 "dhchap_key": "key3", 00:15:01.684 "allow_unrecognized_csi": false 00:15:01.684 } 00:15:01.684 } 00:15:01.684 Got JSON-RPC error response 00:15:01.684 GoRPCClient: error on JSON-RPC call 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:01.684 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.943 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.203 2024/12/16 11:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:02.203 request: 00:15:02.203 { 00:15:02.203 "method": "bdev_nvme_attach_controller", 00:15:02.203 "params": { 00:15:02.203 "name": "nvme0", 00:15:02.203 "trtype": "tcp", 00:15:02.203 "traddr": "10.0.0.3", 00:15:02.203 "adrfam": "ipv4", 00:15:02.203 "trsvcid": "4420", 00:15:02.203 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:02.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:15:02.203 "prchk_reftag": false, 00:15:02.203 "prchk_guard": false, 00:15:02.203 "hdgst": false, 00:15:02.203 "ddgst": false, 00:15:02.203 "dhchap_key": "key3", 00:15:02.203 "allow_unrecognized_csi": false 00:15:02.203 } 00:15:02.203 } 00:15:02.203 Got JSON-RPC error response 00:15:02.203 GoRPCClient: error on JSON-RPC call 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.203 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.462 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.028 2024/12/16 11:35:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:03.028 request: 00:15:03.028 { 00:15:03.028 "method": "bdev_nvme_attach_controller", 00:15:03.028 "params": { 00:15:03.028 "name": "nvme0", 00:15:03.028 "trtype": "tcp", 00:15:03.028 "traddr": "10.0.0.3", 00:15:03.028 "adrfam": "ipv4", 00:15:03.028 "trsvcid": "4420", 00:15:03.028 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:03.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:15:03.028 "prchk_reftag": false, 00:15:03.028 "prchk_guard": false, 00:15:03.028 "hdgst": false, 00:15:03.028 "ddgst": false, 00:15:03.028 "dhchap_key": "key0", 00:15:03.028 "dhchap_ctrlr_key": "key1", 00:15:03.028 "allow_unrecognized_csi": false 00:15:03.028 } 00:15:03.028 } 00:15:03.028 Got JSON-RPC error response 00:15:03.028 GoRPCClient: error on JSON-RPC call 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.028 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.286 nvme0n1 00:15:03.286 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:03.286 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.286 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:03.853 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.853 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.853 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.853 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 00:15:03.853 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.853 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:04.112 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.112 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:04.112 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:04.112 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:05.046 nvme0n1 00:15:05.046 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:05.046 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.046 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.303 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:05.560 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.561 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:15:05.561 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid 503bc55d-5584-47ba-b171-ff181f376052 -l 0 --dhchap-secret DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: --dhchap-ctrl-secret DHHC-1:03:Njg2NTRiNGE1ZjZmMmNlOTZiZTVkZTljZTAyYzIwZGNmMDYyNDA3YWEwNmNmN2M4NDc1MjY1NjcwZTg5OTU0MbZs/T4=: 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
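The sequence around this point demonstrates live key rotation: nvmf_subsystem_set_keys replaces the DH-HMAC-CHAP keys associated with an existing host entry without removing it, after which the old key1 no longer authenticates (the NOT bdev_connect that follows fails with the usual Code=-5 error) while the new key2/key3 pair succeeds. The rotation call from this run:

    # Swap the host's keys in place; key2 becomes the host key and key3
    # the controller key for bidirectional authentication.
    rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 \
        --dhchap-key key2 --dhchap-ctrlr-key key3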
00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.494 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:06.753 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:07.347 2024/12/16 11:35:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:07.347 request: 00:15:07.347 { 00:15:07.347 "method": "bdev_nvme_attach_controller", 00:15:07.347 "params": { 00:15:07.347 "name": "nvme0", 00:15:07.347 "trtype": "tcp", 00:15:07.347 "traddr": "10.0.0.3", 00:15:07.347 "adrfam": "ipv4", 
00:15:07.347 "trsvcid": "4420", 00:15:07.347 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:07.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052", 00:15:07.347 "prchk_reftag": false, 00:15:07.347 "prchk_guard": false, 00:15:07.347 "hdgst": false, 00:15:07.347 "ddgst": false, 00:15:07.347 "dhchap_key": "key1", 00:15:07.347 "allow_unrecognized_csi": false 00:15:07.347 } 00:15:07.347 } 00:15:07.347 Got JSON-RPC error response 00:15:07.347 GoRPCClient: error on JSON-RPC call 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:08.279 nvme0n1 00:15:08.279 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:08.280 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:08.280 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.538 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.538 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.538 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:08.796 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:09.362 nvme0n1 00:15:09.362 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:09.362 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.362 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:09.620 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.620 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.620 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: '' 2s 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: ]] 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDdmYjJiYWE0YmJjOTE3ZDJhNWM4NTU3YjRhNmFiYWa9iBK2: 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:09.878 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: 2s 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: ]] 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTBhZDNkYTg5NGYxNDU0ZjM0YjMyMjc4NTIzNTI4OTJiZjM5Mzk1ZDdhYjczOTg1AY2ofg==: 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:11.779 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.332 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.899 nvme0n1 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:14.899 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.834 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:15.834 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.834 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:15:16.092 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.092 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:16.092 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.092 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.093 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.093 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:16.093 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:16.351 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:16.351 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:16.351 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.609 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
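The bdev_nvme_set_keys call traced above runs under the harness's NOT wrapper: the subsystem was just restricted to key2/key3, so asking the host controller to switch to key1 must be rejected, and the Permission denied (Code=-13) response that follows is the expected, passing outcome. A minimal sketch of the exit-status-inversion idiom, not the harness's exact implementation:

# Succeed only if the wrapped command fails.
not() { if "$@"; then return 1; else return 0; fi; }

# Expected to be rejected: key1 is no longer configured for this subsystem.
not /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3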
00:15:17.176 2024/12/16 11:35:22 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:17.176 request: 00:15:17.176 { 00:15:17.176 "method": "bdev_nvme_set_keys", 00:15:17.176 "params": { 00:15:17.176 "name": "nvme0", 00:15:17.176 "dhchap_key": "key1", 00:15:17.176 "dhchap_ctrlr_key": "key3" 00:15:17.176 } 00:15:17.176 } 00:15:17.176 Got JSON-RPC error response 00:15:17.176 GoRPCClient: error on JSON-RPC call 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:17.479 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.737 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:17.737 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:18.672 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:18.672 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:18.672 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:18.931 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.866 nvme0n1 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.866 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.802 2024/12/16 11:35:25 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:20.802 request: 00:15:20.802 { 00:15:20.802 "method": "bdev_nvme_set_keys", 00:15:20.802 "params": { 00:15:20.802 "name": "nvme0", 00:15:20.802 "dhchap_key": "key2", 00:15:20.802 "dhchap_ctrlr_key": "key0" 00:15:20.802 } 00:15:20.802 } 00:15:20.802 Got JSON-RPC error response 00:15:20.802 GoRPCClient: error on JSON-RPC call 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:20.802 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:20.802 11:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.060 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:21.060 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:21.996 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:21.996 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:21.996 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78297 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 78297 ']' 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 78297 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78297 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.254 killing process with pid 78297 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78297' 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 78297 00:15:22.254 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 78297 00:15:22.862 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:22.862 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:22.862 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:22.862 rmmod nvme_tcp 00:15:22.862 rmmod nvme_fabrics 00:15:22.862 rmmod nvme_keyring 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- 
# set -e 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 83209 ']' 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 83209 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83209 ']' 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83209 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.862 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83209 00:15:22.862 killing process with pid 83209 00:15:22.863 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.863 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.863 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83209' 00:15:22.863 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83209 00:15:22.863 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83209 00:15:23.121 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.121 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:23.121 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.121 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:23.121 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.122 11:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.122 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.lS6 /tmp/spdk.key-sha256.P3L /tmp/spdk.key-sha384.FkJ /tmp/spdk.key-sha512.Rhy /tmp/spdk.key-sha512.wCF /tmp/spdk.key-sha384.dW9 /tmp/spdk.key-sha256.8BS '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:23.381 ************************************ 00:15:23.381 END TEST nvmf_auth_target 00:15:23.381 ************************************ 00:15:23.381 00:15:23.381 real 3m17.202s 00:15:23.381 user 8m0.732s 00:15:23.381 sys 0m24.073s 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.381 ************************************ 00:15:23.381 START TEST nvmf_bdevio_no_huge 00:15:23.381 ************************************ 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.381 * Looking for test storage... 
00:15:23.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.381 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.382 --rc genhtml_branch_coverage=1 00:15:23.382 --rc genhtml_function_coverage=1 00:15:23.382 --rc genhtml_legend=1 00:15:23.382 --rc geninfo_all_blocks=1 00:15:23.382 --rc geninfo_unexecuted_blocks=1 00:15:23.382 00:15:23.382 ' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.382 --rc genhtml_branch_coverage=1 00:15:23.382 --rc genhtml_function_coverage=1 00:15:23.382 --rc genhtml_legend=1 00:15:23.382 --rc geninfo_all_blocks=1 00:15:23.382 --rc geninfo_unexecuted_blocks=1 00:15:23.382 00:15:23.382 ' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.382 --rc genhtml_branch_coverage=1 00:15:23.382 --rc genhtml_function_coverage=1 00:15:23.382 --rc genhtml_legend=1 00:15:23.382 --rc geninfo_all_blocks=1 00:15:23.382 --rc geninfo_unexecuted_blocks=1 00:15:23.382 00:15:23.382 ' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.382 --rc genhtml_branch_coverage=1 00:15:23.382 --rc genhtml_function_coverage=1 00:15:23.382 --rc genhtml_legend=1 00:15:23.382 --rc geninfo_all_blocks=1 00:15:23.382 --rc geninfo_unexecuted_blocks=1 00:15:23.382 00:15:23.382 ' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.382 
11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.382 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.382 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:23.641 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.642 
11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:23.642 Cannot find device "nvmf_init_br" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:23.642 Cannot find device "nvmf_init_br2" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:23.642 Cannot find device "nvmf_tgt_br" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.642 Cannot find device "nvmf_tgt_br2" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:23.642 Cannot find device "nvmf_init_br" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:23.642 Cannot find device "nvmf_init_br2" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:23.642 Cannot find device "nvmf_tgt_br" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:23.642 Cannot find device "nvmf_tgt_br2" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:23.642 Cannot find device "nvmf_br" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:23.642 Cannot find device "nvmf_init_if" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:23.642 Cannot find device "nvmf_init_if2" 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:23.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:23.642 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:23.642 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:23.901 11:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:23.901 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:23.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:23.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:15:23.902 00:15:23.902 --- 10.0.0.3 ping statistics --- 00:15:23.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.902 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:23.902 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:23.902 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:15:23.902 00:15:23.902 --- 10.0.0.4 ping statistics --- 00:15:23.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.902 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:23.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:23.902 00:15:23.902 --- 10.0.0.1 ping statistics --- 00:15:23.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.902 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:23.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:23.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:23.902 00:15:23.902 --- 10.0.0.2 ping statistics --- 00:15:23.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.902 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=84082 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 84082 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 84082 ']' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.902 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:23.902 [2024-12-16 11:35:29.222972] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
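For reference, the nvmf_veth_init phase traced above can be reproduced by hand; a condensed sketch of the first initiator/target path, using the interface names and 10.0.0.0/24 addresses visible in the trace (the nvmf_init_if2/nvmf_tgt_if2 pair is analogous, and the iptables ACCEPT rules the trace adds are shown later):

    #!/usr/bin/env bash
    # Condensed reconstruction of the veth/netns topology nvmf_veth_init builds above.
    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns "$NS"               # target end lives inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge                   # bridge joins the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                # same reachability check as the trace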
00:15:23.902 [2024-12-16 11:35:29.223283] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:24.160 [2024-12-16 11:35:29.381748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.160 [2024-12-16 11:35:29.467678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.160 [2024-12-16 11:35:29.468204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.160 [2024-12-16 11:35:29.468725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.160 [2024-12-16 11:35:29.469237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.160 [2024-12-16 11:35:29.469434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.160 [2024-12-16 11:35:29.470531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:15:24.160 [2024-12-16 11:35:29.470648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:15:24.160 [2024-12-16 11:35:29.470783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:15:24.160 [2024-12-16 11:35:29.470791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.095 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.095 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:25.095 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.095 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 [2024-12-16 11:35:30.290758] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 Malloc0 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.096 [2024-12-16 11:35:30.330946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:25.096 { 00:15:25.096 "params": { 00:15:25.096 "name": "Nvme$subsystem", 00:15:25.096 "trtype": "$TEST_TRANSPORT", 00:15:25.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.096 "adrfam": "ipv4", 00:15:25.096 "trsvcid": "$NVMF_PORT", 00:15:25.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.096 "hdgst": ${hdgst:-false}, 00:15:25.096 "ddgst": ${ddgst:-false} 00:15:25.096 }, 00:15:25.096 "method": "bdev_nvme_attach_controller" 00:15:25.096 } 00:15:25.096 EOF 00:15:25.096 )") 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
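The target-side provisioning just traced (bdevio.sh lines 18-22) boils down to five RPC calls; a sketch of the same sequence run by hand, assuming a running nvmf_tgt listening on the default /var/tmp/spdk.sock:

    # Target provisioning as traced above: transport, backing bdev, subsystem,
    # namespace, listener.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

After this, the gen_nvmf_target_json output shown in the trace gives the initiator-side bdevio process a matching bdev_nvme_attach_controller config pointing at 10.0.0.3:4420.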
00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:25.096 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:25.096 "params": { 00:15:25.096 "name": "Nvme1", 00:15:25.096 "trtype": "tcp", 00:15:25.096 "traddr": "10.0.0.3", 00:15:25.096 "adrfam": "ipv4", 00:15:25.096 "trsvcid": "4420", 00:15:25.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.096 "hdgst": false, 00:15:25.096 "ddgst": false 00:15:25.096 }, 00:15:25.096 "method": "bdev_nvme_attach_controller" 00:15:25.096 }' 00:15:25.096 [2024-12-16 11:35:30.391780] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:15:25.096 [2024-12-16 11:35:30.392478] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84136 ] 00:15:25.354 [2024-12-16 11:35:30.552493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.354 [2024-12-16 11:35:30.638339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.354 [2024-12-16 11:35:30.638519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.354 [2024-12-16 11:35:30.638531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.613 I/O targets: 00:15:25.613 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:25.613 00:15:25.613 00:15:25.613 CUnit - A unit testing framework for C - Version 2.1-3 00:15:25.613 http://cunit.sourceforge.net/ 00:15:25.613 00:15:25.613 00:15:25.613 Suite: bdevio tests on: Nvme1n1 00:15:25.613 Test: blockdev write read block ...passed 00:15:25.613 Test: blockdev write zeroes read block ...passed 00:15:25.613 Test: blockdev write zeroes read no split ...passed 00:15:25.613 Test: blockdev write zeroes read split ...passed 00:15:25.613 Test: blockdev write zeroes read split partial ...passed 00:15:25.613 Test: blockdev reset ...[2024-12-16 11:35:30.992753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:25.613 [2024-12-16 11:35:30.993039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1858720 (9): Bad file descriptor 00:15:25.613 [2024-12-16 11:35:31.012729] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:25.613 passed 00:15:25.613 Test: blockdev write read 8 blocks ...passed 00:15:25.613 Test: blockdev write read size > 128k ...passed 00:15:25.613 Test: blockdev write read invalid size ...passed 00:15:25.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.872 Test: blockdev write read max offset ...passed 00:15:25.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.872 Test: blockdev writev readv 8 blocks ...passed 00:15:25.872 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.872 Test: blockdev writev readv block ...passed 00:15:25.872 Test: blockdev writev readv size > 128k ...passed 00:15:25.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.872 Test: blockdev comparev and writev ...[2024-12-16 11:35:31.187166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.187352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.187381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.187395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.187702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.187720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.187737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.187747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.188045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.188063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.188080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.188090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.188600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.188631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.188650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.872 [2024-12-16 11:35:31.188660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:25.872 passed 00:15:25.872 Test: blockdev nvme passthru rw ...passed 00:15:25.872 Test: blockdev nvme passthru vendor specific ...[2024-12-16 11:35:31.272305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.872 [2024-12-16 11:35:31.272333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.272454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.872 [2024-12-16 11:35:31.272470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.272575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.872 [2024-12-16 11:35:31.272591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:25.872 [2024-12-16 11:35:31.272701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.872 [2024-12-16 11:35:31.272721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:25.872 passed 00:15:25.872 Test: blockdev nvme admin passthru ...passed 00:15:26.131 Test: blockdev copy ...passed 00:15:26.131 00:15:26.131 Run Summary: Type Total Ran Passed Failed Inactive 00:15:26.131 suites 1 1 n/a 0 0 00:15:26.131 tests 23 23 23 0 0 00:15:26.131 asserts 152 152 152 0 n/a 00:15:26.131 00:15:26.131 Elapsed time = 0.922 seconds 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.389 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.647 rmmod nvme_tcp 00:15:26.647 rmmod nvme_fabrics 00:15:26.647 rmmod nvme_keyring 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 84082 ']' 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 84082 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 84082 ']' 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 84082 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84082 00:15:26.647 killing process with pid 84082 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:26.647 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:26.648 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84082' 00:15:26.648 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 84082 00:15:26.648 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 84082 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:26.941 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.199 11:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:27.199 00:15:27.199 real 0m3.917s 00:15:27.199 user 0m13.481s 00:15:27.199 sys 0m1.464s 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:27.199 ************************************ 00:15:27.199 END TEST nvmf_bdevio_no_huge 00:15:27.199 ************************************ 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.199 ************************************ 00:15:27.199 START TEST nvmf_tls 00:15:27.199 ************************************ 00:15:27.199 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.459 * Looking for test storage... 
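The ipts/iptr helpers visible in the setup and teardown traces follow a tag-and-filter pattern: every rule is inserted with a recognizable comment, so cleanup can drop all of them in one pass without tracking rule positions. A standalone sketch of the pattern, mirroring the commands in the trace:

    # Insert a rule tagged with an SPDK_NVMF comment (as ipts does above) ...
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # ... then teardown (iptr) removes every tagged rule by filtering a full dump:
    iptables-save | grep -v SPDK_NVMF | iptables-restore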
00:15:27.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.459 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.459 --rc genhtml_branch_coverage=1 00:15:27.459 --rc genhtml_function_coverage=1 00:15:27.459 --rc genhtml_legend=1 00:15:27.459 --rc geninfo_all_blocks=1 00:15:27.460 --rc geninfo_unexecuted_blocks=1 00:15:27.460 00:15:27.460 ' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.460 --rc genhtml_branch_coverage=1 00:15:27.460 --rc genhtml_function_coverage=1 00:15:27.460 --rc genhtml_legend=1 00:15:27.460 --rc geninfo_all_blocks=1 00:15:27.460 --rc geninfo_unexecuted_blocks=1 00:15:27.460 00:15:27.460 ' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.460 --rc genhtml_branch_coverage=1 00:15:27.460 --rc genhtml_function_coverage=1 00:15:27.460 --rc genhtml_legend=1 00:15:27.460 --rc geninfo_all_blocks=1 00:15:27.460 --rc geninfo_unexecuted_blocks=1 00:15:27.460 00:15:27.460 ' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.460 --rc genhtml_branch_coverage=1 00:15:27.460 --rc genhtml_function_coverage=1 00:15:27.460 --rc genhtml_legend=1 00:15:27.460 --rc geninfo_all_blocks=1 00:15:27.460 --rc geninfo_unexecuted_blocks=1 00:15:27.460 00:15:27.460 ' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.460 11:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.460 
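The "[: : integer expression expected" message recorded above comes from an arithmetic test on an empty variable in nvmf/common.sh line 33 ('[' '' -eq 1 ']'). It is harmless here, but the failure mode is worth noting; a minimal illustration using FLAG as a stand-in name (the actual variable is not shown in the trace):

    # [ "$FLAG" -eq 1 ] with FLAG unset expands to [ "" -eq 1 ] and errors out.
    # Defaulting the expansion keeps the test well-formed:
    [ "${FLAG:-0}" -eq 1 ] && echo "flag set"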
11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.460 Cannot find device "nvmf_init_br" 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.460 Cannot find device "nvmf_init_br2" 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.460 Cannot find device "nvmf_tgt_br" 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.460 Cannot find device "nvmf_tgt_br2" 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.460 Cannot find device "nvmf_init_br" 00:15:27.460 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:27.461 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.461 Cannot find device "nvmf_init_br2" 00:15:27.461 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:27.461 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.461 Cannot find device "nvmf_tgt_br" 00:15:27.461 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:27.461 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.719 Cannot find device "nvmf_tgt_br2" 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.719 Cannot find device "nvmf_br" 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.719 Cannot find device "nvmf_init_if" 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.719 Cannot find device "nvmf_init_if2" 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.719 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.720 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:27.979 11:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:27.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:27.979 00:15:27.979 --- 10.0.0.3 ping statistics --- 00:15:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.979 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:27.979 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:27.979 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:27.979 00:15:27.979 --- 10.0.0.4 ping statistics --- 00:15:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.979 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:27.979 00:15:27.979 --- 10.0.0.1 ping statistics --- 00:15:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.979 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:27.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:27.979 00:15:27.979 --- 10.0.0.2 ping statistics --- 00:15:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.979 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84379 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84379 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84379 ']' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.979 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.979 [2024-12-16 11:35:33.298356] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
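The block above is nvmf/common.sh standing up the test network: two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to a bridge, iptables openings for the NVMe/TCP port, and a ping sweep proving all four addresses route. A condensed sketch of the same topology, reduced to one veth pair per side (interface and namespace names are taken from the log; the ipts helper expanded at common.sh@790 appears to be iptables plus an SPDK_NVMF comment tag):

    # One initiator-side and one target-side veth pair, bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Host keeps 10.0.0.1; the namespaced target end gets 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring every link up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the two host-side peer ends so the segments see each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP (port 4420) and bridged forwarding, then verify, as the log does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3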
00:15:27.979 [2024-12-16 11:35:33.298432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.238 [2024-12-16 11:35:33.451076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.238 [2024-12-16 11:35:33.514367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.238 [2024-12-16 11:35:33.514455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.238 [2024-12-16 11:35:33.514494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.238 [2024-12-16 11:35:33.514506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.238 [2024-12-16 11:35:33.514515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.238 [2024-12-16 11:35:33.514963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:29.174 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:29.433 true 00:15:29.433 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:29.433 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:29.692 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:29.692 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:29.692 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:29.949 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:29.949 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.206 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:30.206 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:30.206 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:30.464 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:30.464 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:30.722 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:30.722 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:30.722 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.722 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:30.982 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:30.982 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:30.982 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:31.248 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.248 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:31.506 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:31.506 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:31.506 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:31.764 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:31.764 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.71z0YlyoAQ 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.4sGuISk101 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.71z0YlyoAQ 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.4sGuISk101 00:15:32.330 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:32.588 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:32.847 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.71z0YlyoAQ 00:15:32.847 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.71z0YlyoAQ 00:15:32.847 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.106 [2024-12-16 11:35:38.432676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.106 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.364 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:33.622 [2024-12-16 11:35:38.964803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.622 [2024-12-16 11:35:38.965084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.622 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.880 malloc0 00:15:33.880 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:34.138 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ 00:15:34.396 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:34.654 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.71z0YlyoAQ 00:15:46.851 Initializing NVMe Controllers 00:15:46.851 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.851 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:46.852 Initialization complete. Launching workers. 00:15:46.852 ======================================================== 00:15:46.852 Latency(us) 00:15:46.852 Device Information : IOPS MiB/s Average min max 00:15:46.852 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9694.70 37.87 6602.99 1365.36 8355.92 00:15:46.852 ======================================================== 00:15:46.852 Total : 9694.70 37.87 6602.99 1365.36 8355.92 00:15:46.852 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.71z0YlyoAQ 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.71z0YlyoAQ 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84751 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84751 /var/tmp/bdevperf.sock 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84751 ']' 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.852 [2024-12-16 11:35:50.278330] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
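Before the perf run above, setup_nvmf_tgt (target/tls.sh@50-59) assembled the TLS-enabled target entirely over JSON-RPC: pin TLS 1.3 on the ssl socket implementation, create the TCP transport, add a subsystem with a TLS-secured listener, back it with a malloc bdev, register the PSK file in the keyring, and bind host1 to that key. The same sequence, condensed (every call, path, and NQN exactly as it appears in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as requiring a secure channel (TLS).
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0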
00:15:46.852 [2024-12-16 11:35:50.278441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84751 ] 00:15:46.852 [2024-12-16 11:35:50.425151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.852 [2024-12-16 11:35:50.480437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ 00:15:46.852 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:46.852 [2024-12-16 11:35:51.134078] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.852 TLSTESTn1 00:15:46.852 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:46.852 Running I/O for 10 seconds... 00:15:48.046 3964.00 IOPS, 15.48 MiB/s [2024-12-16T11:35:54.396Z] 4057.50 IOPS, 15.85 MiB/s [2024-12-16T11:35:55.769Z] 4079.67 IOPS, 15.94 MiB/s [2024-12-16T11:35:56.335Z] 4104.50 IOPS, 16.03 MiB/s [2024-12-16T11:35:57.711Z] 4119.00 IOPS, 16.09 MiB/s [2024-12-16T11:35:58.645Z] 4128.33 IOPS, 16.13 MiB/s [2024-12-16T11:35:59.581Z] 4134.86 IOPS, 16.15 MiB/s [2024-12-16T11:36:00.516Z] 4142.75 IOPS, 16.18 MiB/s [2024-12-16T11:36:01.478Z] 4148.00 IOPS, 16.20 MiB/s [2024-12-16T11:36:01.478Z] 4149.90 IOPS, 16.21 MiB/s 00:15:56.064 Latency(us) 00:15:56.064 [2024-12-16T11:36:01.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.064 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:56.064 Verification LBA range: start 0x0 length 0x2000 00:15:56.064 TLSTESTn1 : 10.02 4155.95 16.23 0.00 0.00 30741.28 6523.81 30146.56 00:15:56.064 [2024-12-16T11:36:01.478Z] =================================================================================================================== 00:15:56.064 [2024-12-16T11:36:01.478Z] Total : 4155.95 16.23 0.00 0.00 30741.28 6523.81 30146.56 00:15:56.064 { 00:15:56.064 "results": [ 00:15:56.064 { 00:15:56.064 "job": "TLSTESTn1", 00:15:56.064 "core_mask": "0x4", 00:15:56.064 "workload": "verify", 00:15:56.064 "status": "finished", 00:15:56.064 "verify_range": { 00:15:56.064 "start": 0, 00:15:56.064 "length": 8192 00:15:56.064 }, 00:15:56.064 "queue_depth": 128, 00:15:56.064 "io_size": 4096, 00:15:56.064 "runtime": 10.01623, 00:15:56.064 "iops": 4155.954885221286, 00:15:56.064 "mibps": 16.234198770395647, 00:15:56.064 "io_failed": 0, 00:15:56.064 "io_timeout": 0, 00:15:56.064 "avg_latency_us": 30741.279123296285, 00:15:56.064 "min_latency_us": 6523.810909090909, 00:15:56.064 "max_latency_us": 30146.56 00:15:56.064 } 00:15:56.064 ], 00:15:56.064 "core_count": 1 00:15:56.064 } 00:15:56.064 11:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84751 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84751 ']' 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84751 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84751 00:15:56.064 killing process with pid 84751 00:15:56.064 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.064 00:15:56.064 Latency(us) 00:15:56.064 [2024-12-16T11:36:01.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.064 [2024-12-16T11:36:01.478Z] =================================================================================================================== 00:15:56.064 [2024-12-16T11:36:01.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84751' 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84751 00:15:56.064 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84751 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sGuISk101 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sGuISk101 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sGuISk101 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4sGuISk101 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84897 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84897 /var/tmp/bdevperf.sock 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84897 ']' 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.323 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 [2024-12-16 11:36:01.674070] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:15:56.323 [2024-12-16 11:36:01.674241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84897 ] 00:15:56.581 [2024-12-16 11:36:01.825189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.581 [2024-12-16 11:36:01.908631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.516 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.516 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:57.516 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sGuISk101 00:15:57.774 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:58.033 [2024-12-16 11:36:03.335774] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.033 [2024-12-16 11:36:03.347169] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:58.033 [2024-12-16 11:36:03.347384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24276a0 (107): Transport endpoint is not connected 00:15:58.033 [2024-12-16 11:36:03.348375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24276a0 (9): Bad file descriptor 00:15:58.033 [2024-12-16 
11:36:03.349371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:58.033 [2024-12-16 11:36:03.349396] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:58.033 [2024-12-16 11:36:03.349407] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:58.033 [2024-12-16 11:36:03.349423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:15:58.033 2024/12/16 11:36:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:58.033 request: 00:15:58.033 { 00:15:58.033 "method": "bdev_nvme_attach_controller", 00:15:58.033 "params": { 00:15:58.033 "name": "TLSTEST", 00:15:58.033 "trtype": "tcp", 00:15:58.033 "traddr": "10.0.0.3", 00:15:58.033 "adrfam": "ipv4", 00:15:58.033 "trsvcid": "4420", 00:15:58.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.033 "prchk_reftag": false, 00:15:58.033 "prchk_guard": false, 00:15:58.033 "hdgst": false, 00:15:58.033 "ddgst": false, 00:15:58.033 "psk": "key0", 00:15:58.033 "allow_unrecognized_csi": false 00:15:58.033 } 00:15:58.033 } 00:15:58.033 Got JSON-RPC error response 00:15:58.033 GoRPCClient: error on JSON-RPC call 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84897 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84897 ']' 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84897 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84897 00:15:58.033 killing process with pid 84897 00:15:58.033 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.033 00:15:58.033 Latency(us) 00:15:58.033 [2024-12-16T11:36:03.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.033 [2024-12-16T11:36:03.447Z] =================================================================================================================== 00:15:58.033 [2024-12-16T11:36:03.447Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84897' 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84897 00:15:58.033 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 84897 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.71z0YlyoAQ 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.71z0YlyoAQ 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.71z0YlyoAQ 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.71z0YlyoAQ 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84955 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84955 /var/tmp/bdevperf.sock 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84955 ']' 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
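Every attach attempt in this file, successful or not, goes through the same run_bdevperf scaffolding (target/tls.sh@22-46): bdevperf is launched idle with -z on a private RPC socket, the PSK is registered and the controller attached over that socket, and only then does bdevperf.py trigger the I/O phase. A sketch of the pattern with the paths from this run:

    sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
    # (the harness waits for the app to listen on $sock before proceeding)
    rpc.py -s $sock keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ
    rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests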
00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.292 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.292 [2024-12-16 11:36:03.645841] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:15:58.292 [2024-12-16 11:36:03.645941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84955 ] 00:15:58.551 [2024-12-16 11:36:03.788112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.551 [2024-12-16 11:36:03.837380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.551 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.551 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:58.551 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ 00:15:59.118 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:59.377 [2024-12-16 11:36:04.544676] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.377 [2024-12-16 11:36:04.549839] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:59.377 [2024-12-16 11:36:04.550252] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:59.377 [2024-12-16 11:36:04.550336] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:59.377 [2024-12-16 11:36:04.550559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15886a0 (107): Transport endpoint is not connected 00:15:59.377 [2024-12-16 11:36:04.551546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15886a0 (9): Bad file descriptor 00:15:59.377 [2024-12-16 11:36:04.552543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:59.377 [2024-12-16 11:36:04.552567] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:59.377 [2024-12-16 11:36:04.552577] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:59.377 [2024-12-16 11:36:04.552592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
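The tcp_sock_get_key error above shows why presenting the correct key file under the wrong host NQN still fails: the target looks the PSK up by the TLS identity offered in the handshake, which binds both NQNs together as "NVMe0R01 <host NQN> <subsystem NQN>". Only host1 was mapped to key0 earlier, so host2's identity finds nothing. Hypothetically, authorizing the second host would be one more RPC — the test deliberately omits it so this attach must fail:

    # NOT run in this test -- shown only to make the failure mode concrete.
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0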
00:15:59.377 2024/12/16 11:36:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:59.377 request: 00:15:59.377 { 00:15:59.377 "method": "bdev_nvme_attach_controller", 00:15:59.377 "params": { 00:15:59.377 "name": "TLSTEST", 00:15:59.377 "trtype": "tcp", 00:15:59.377 "traddr": "10.0.0.3", 00:15:59.377 "adrfam": "ipv4", 00:15:59.377 "trsvcid": "4420", 00:15:59.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.377 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:59.377 "prchk_reftag": false, 00:15:59.377 "prchk_guard": false, 00:15:59.377 "hdgst": false, 00:15:59.377 "ddgst": false, 00:15:59.377 "psk": "key0", 00:15:59.377 "allow_unrecognized_csi": false 00:15:59.377 } 00:15:59.377 } 00:15:59.377 Got JSON-RPC error response 00:15:59.377 GoRPCClient: error on JSON-RPC call 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84955 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84955 ']' 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84955 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84955 00:15:59.377 killing process with pid 84955 00:15:59.377 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.377 00:15:59.377 Latency(us) 00:15:59.377 [2024-12-16T11:36:04.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.377 [2024-12-16T11:36:04.791Z] =================================================================================================================== 00:15:59.377 [2024-12-16T11:36:04.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:59.377 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:59.378 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84955' 00:15:59.378 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84955 00:15:59.378 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84955 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.653 11:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.71z0YlyoAQ 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.71z0YlyoAQ 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.71z0YlyoAQ 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.71z0YlyoAQ 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84994 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84994 /var/tmp/bdevperf.sock 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84994 ']' 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:59.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.653 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.653 [2024-12-16 11:36:04.851950] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
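The NOT/valid_exec_arg expansion above is how the suite encodes expected failures: the wrapped command's exit status is inverted, so an attach that unexpectedly succeeds fails the test. A minimal sketch of the idea (the real helper in autotest_common.sh is more elaborate):

    # Hypothetical reduction of the NOT helper seen in the xtrace.
    NOT() {
        if "$@"; then
            return 1    # command succeeded, which the test did not expect
        fi
        return 0        # command failed, as required
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.71z0YlyoAQ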
00:15:59.653 [2024-12-16 11:36:04.852065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84994 ] 00:15:59.653 [2024-12-16 11:36:04.992445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.653 [2024-12-16 11:36:05.046167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.935 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.935 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:59.935 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.71z0YlyoAQ 00:16:00.194 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:00.453 [2024-12-16 11:36:05.735376] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:00.453 [2024-12-16 11:36:05.740317] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:00.453 [2024-12-16 11:36:05.740359] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:00.453 [2024-12-16 11:36:05.740407] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:00.453 [2024-12-16 11:36:05.741033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e6a0 (107): Transport endpoint is not connected 00:16:00.453 [2024-12-16 11:36:05.742008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e6a0 (9): Bad file descriptor 00:16:00.453 [2024-12-16 11:36:05.743005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:00.453 [2024-12-16 11:36:05.743051] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:00.453 [2024-12-16 11:36:05.743062] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:00.453 [2024-12-16 11:36:05.743078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
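This is the mirror image of the host2 case: the host NQN is the registered one, but no cnode2 subsystem exists, so the identity "NVMe0R01 ...host1 ...cnode2" again has no PSK behind it. When triaging such failures by hand, the target's actual subsystem and host configuration can be dumped over the same RPC interface (a hedged example; nvmf_get_subsystems is the standard query RPC, and the jq projection here is purely illustrative):

    rpc.py nvmf_get_subsystems | jq -r '.[] | "\(.nqn) hosts=\(.hosts)"'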
00:16:00.453 2024/12/16 11:36:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:00.453 request: 00:16:00.453 { 00:16:00.453 "method": "bdev_nvme_attach_controller", 00:16:00.453 "params": { 00:16:00.453 "name": "TLSTEST", 00:16:00.453 "trtype": "tcp", 00:16:00.453 "traddr": "10.0.0.3", 00:16:00.453 "adrfam": "ipv4", 00:16:00.453 "trsvcid": "4420", 00:16:00.453 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:00.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.453 "prchk_reftag": false, 00:16:00.453 "prchk_guard": false, 00:16:00.453 "hdgst": false, 00:16:00.453 "ddgst": false, 00:16:00.453 "psk": "key0", 00:16:00.453 "allow_unrecognized_csi": false 00:16:00.453 } 00:16:00.453 } 00:16:00.453 Got JSON-RPC error response 00:16:00.453 GoRPCClient: error on JSON-RPC call 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84994 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84994 ']' 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84994 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84994 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:00.453 killing process with pid 84994 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84994' 00:16:00.453 Received shutdown signal, test time was about 10.000000 seconds 00:16:00.453 00:16:00.453 Latency(us) 00:16:00.453 [2024-12-16T11:36:05.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.453 [2024-12-16T11:36:05.867Z] =================================================================================================================== 00:16:00.453 [2024-12-16T11:36:05.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84994 00:16:00.453 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84994 00:16:00.711 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:00.711 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:00.711 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.711 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.711 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.711 11:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85033 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85033 /var/tmp/bdevperf.sock 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85033 ']' 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.711 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.712 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.712 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.712 [2024-12-16 11:36:06.056058] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
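This negative case hands keyring_file_add_key an empty string as the key path, and the keyring rejects it outright ("Non-absolute paths are not allowed" below) before any TLS handshake can start. The positive cases satisfy the keyring's expectations up front, as in this condensed sketch (key material elided; the suite writes the interchange-format PSK and makes the file owner-only before registering it):

    key_path=$(mktemp)                             # mktemp yields an absolute path
    echo -n "NVMeTLSkey-1:01:...:" > "$key_path"   # PSK elided here
    chmod 0600 "$key_path"                         # private to the owner, as the suite enforces
    rpc.py keyring_file_add_key key0 "$key_path"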
00:16:00.712 [2024-12-16 11:36:06.056158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85033 ] 00:16:00.970 [2024-12-16 11:36:06.200018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.970 [2024-12-16 11:36:06.254868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.904 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.904 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:01.904 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:02.163 [2024-12-16 11:36:07.371844] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:02.163 [2024-12-16 11:36:07.371908] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:02.163 2024/12/16 11:36:07 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:02.163 request: 00:16:02.163 { 00:16:02.163 "method": "keyring_file_add_key", 00:16:02.163 "params": { 00:16:02.163 "name": "key0", 00:16:02.163 "path": "" 00:16:02.163 } 00:16:02.163 } 00:16:02.163 Got JSON-RPC error response 00:16:02.163 GoRPCClient: error on JSON-RPC call 00:16:02.163 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:02.422 [2024-12-16 11:36:07.664033] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.422 [2024-12-16 11:36:07.664104] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:02.422 2024/12/16 11:36:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:02.422 request: 00:16:02.422 { 00:16:02.422 "method": "bdev_nvme_attach_controller", 00:16:02.422 "params": { 00:16:02.422 "name": "TLSTEST", 00:16:02.422 "trtype": "tcp", 00:16:02.422 "traddr": "10.0.0.3", 00:16:02.422 "adrfam": "ipv4", 00:16:02.422 "trsvcid": "4420", 00:16:02.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.422 "prchk_reftag": false, 00:16:02.422 "prchk_guard": false, 00:16:02.422 "hdgst": false, 00:16:02.422 "ddgst": false, 00:16:02.422 "psk": "key0", 00:16:02.422 "allow_unrecognized_csi": false 00:16:02.422 } 00:16:02.422 } 00:16:02.422 Got JSON-RPC error response 00:16:02.422 GoRPCClient: error on JSON-RPC call 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85033 00:16:02.422 11:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85033 ']' 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85033 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85033 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:02.422 killing process with pid 85033 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85033' 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85033 00:16:02.422 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.422 00:16:02.422 Latency(us) 00:16:02.422 [2024-12-16T11:36:07.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.422 [2024-12-16T11:36:07.836Z] =================================================================================================================== 00:16:02.422 [2024-12-16T11:36:07.836Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.422 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85033 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84379 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84379 ']' 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84379 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84379 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:02.681 killing process with pid 84379 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84379' 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84379 00:16:02.681 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84379 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.0XhZFTuKQV 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.0XhZFTuKQV 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85102 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85102 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85102 ']' 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.939 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.940 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.940 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.940 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.940 [2024-12-16 11:36:08.249416] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
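[editor's note] The format_interchange_psk step above is what turns the raw 48-byte secret into the key_long value written to /tmp/tmp.0XhZFTuKQV. A minimal standalone sketch of what the helper computes, judging by the nvmf/common.sh trace and its output: base64 of the secret with its little-endian CRC32 appended, wrapped in the NVMeTLSkey-1 envelope, with the digest argument 2 becoming the ":02:" hash identifier. The function body here is an illustration, not the exact common.sh code:

format_interchange_psk() {
    local key=$1 digest=$2
    python - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# append the CRC32 of the secret (little-endian) before encoding
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF
}

# should print the key_long value seen above, trailing colon included
format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2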
00:16:02.940 [2024-12-16 11:36:08.249520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.198 [2024-12-16 11:36:08.393928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.198 [2024-12-16 11:36:08.452483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.198 [2024-12-16 11:36:08.452536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.198 [2024-12-16 11:36:08.452563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.198 [2024-12-16 11:36:08.452572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.198 [2024-12-16 11:36:08.452579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.198 [2024-12-16 11:36:08.452976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.198 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.198 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.198 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.198 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.198 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.457 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.457 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:03.457 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0XhZFTuKQV 00:16:03.457 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:03.457 [2024-12-16 11:36:08.858361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.715 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:03.974 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:04.232 [2024-12-16 11:36:09.462491] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.232 [2024-12-16 11:36:09.462712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.232 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:04.494 malloc0 00:16:04.494 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:04.752 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:05.010 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XhZFTuKQV 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0XhZFTuKQV 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85204 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85204 /var/tmp/bdevperf.sock 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85204 ']' 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.269 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.269 [2024-12-16 11:36:10.680290] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
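[editor's note] Condensed from the xtrace above, the TLS-enabled target setup that tls.sh@50-59 just performed boils down to seven rpc.py calls; the paths, NQNs, and sizes are the ones from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                 # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                    # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0           # RAM-backed bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV   # absolute path, mode 0600 required
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0             # bind the host NQN to the PSK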
00:16:05.269 [2024-12-16 11:36:10.680378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85204 ] 00:16:05.528 [2024-12-16 11:36:10.828393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.528 [2024-12-16 11:36:10.892069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.463 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.463 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:06.463 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:06.721 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.980 [2024-12-16 11:36:12.166606] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.980 TLSTESTn1 00:16:06.980 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:06.980 Running I/O for 10 seconds... 00:16:09.291 4074.00 IOPS, 15.91 MiB/s [2024-12-16T11:36:15.637Z] 4089.50 IOPS, 15.97 MiB/s [2024-12-16T11:36:16.573Z] 4078.67 IOPS, 15.93 MiB/s [2024-12-16T11:36:17.510Z] 4099.75 IOPS, 16.01 MiB/s [2024-12-16T11:36:18.445Z] 4118.20 IOPS, 16.09 MiB/s [2024-12-16T11:36:19.380Z] 4136.17 IOPS, 16.16 MiB/s [2024-12-16T11:36:20.811Z] 4151.29 IOPS, 16.22 MiB/s [2024-12-16T11:36:21.378Z] 4156.25 IOPS, 16.24 MiB/s [2024-12-16T11:36:22.754Z] 4164.44 IOPS, 16.27 MiB/s [2024-12-16T11:36:22.754Z] 4167.50 IOPS, 16.28 MiB/s 00:16:17.340 Latency(us) 00:16:17.340 [2024-12-16T11:36:22.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.340 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:17.340 Verification LBA range: start 0x0 length 0x2000 00:16:17.340 TLSTESTn1 : 10.02 4173.53 16.30 0.00 0.00 30612.94 5898.24 24427.05 00:16:17.340 [2024-12-16T11:36:22.754Z] =================================================================================================================== 00:16:17.340 [2024-12-16T11:36:22.754Z] Total : 4173.53 16.30 0.00 0.00 30612.94 5898.24 24427.05 00:16:17.340 { 00:16:17.340 "results": [ 00:16:17.340 { 00:16:17.340 "job": "TLSTESTn1", 00:16:17.340 "core_mask": "0x4", 00:16:17.340 "workload": "verify", 00:16:17.340 "status": "finished", 00:16:17.340 "verify_range": { 00:16:17.340 "start": 0, 00:16:17.340 "length": 8192 00:16:17.340 }, 00:16:17.340 "queue_depth": 128, 00:16:17.340 "io_size": 4096, 00:16:17.340 "runtime": 10.015749, 00:16:17.340 "iops": 4173.527112151073, 00:16:17.340 "mibps": 16.30284028184013, 00:16:17.340 "io_failed": 0, 00:16:17.340 "io_timeout": 0, 00:16:17.340 "avg_latency_us": 30612.941958152376, 00:16:17.340 "min_latency_us": 5898.24, 00:16:17.340 "max_latency_us": 24427.054545454546 00:16:17.340 } 00:16:17.340 ], 00:16:17.340 "core_count": 1 00:16:17.340 } 00:16:17.340 11:36:22 
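[editor's note] As a quick sanity check on the result table above, the MiB/s column is just IOPS scaled by the 4096-byte I/O size used in this run:

# 4173.53 IOPS * 4096 B per I/O / 2^20 B per MiB = 16.30 MiB/s,
# matching the reported "mibps": 16.30284028184013
echo '4173.53 * 4096 / 1048576' | bc -l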
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85204 ']' 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:17.340 killing process with pid 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85204' 00:16:17.340 Received shutdown signal, test time was about 10.000000 seconds 00:16:17.340 00:16:17.340 Latency(us) 00:16:17.340 [2024-12-16T11:36:22.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.340 [2024-12-16T11:36:22.754Z] =================================================================================================================== 00:16:17.340 [2024-12-16T11:36:22.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85204 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.0XhZFTuKQV 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XhZFTuKQV 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XhZFTuKQV 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0XhZFTuKQV 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.0XhZFTuKQV 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85358 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85358 /var/tmp/bdevperf.sock 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85358 ']' 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.340 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.341 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.341 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.341 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.341 [2024-12-16 11:36:22.696919] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:17.341 [2024-12-16 11:36:22.697062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85358 ] 00:16:17.599 [2024-12-16 11:36:22.845317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.599 [2024-12-16 11:36:22.900597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.857 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.857 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:17.857 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:18.115 [2024-12-16 11:36:23.285561] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0XhZFTuKQV': 0100666 00:16:18.115 [2024-12-16 11:36:23.285652] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:18.115 2024/12/16 11:36:23 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.0XhZFTuKQV], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:18.115 request: 00:16:18.115 { 00:16:18.115 "method": "keyring_file_add_key", 00:16:18.115 "params": { 00:16:18.115 "name": "key0", 00:16:18.115 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:18.115 } 00:16:18.115 } 00:16:18.115 Got JSON-RPC error response 00:16:18.115 GoRPCClient: error on JSON-RPC call 00:16:18.115 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:18.373 [2024-12-16 11:36:23.561757] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.373 [2024-12-16 11:36:23.561825] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:18.373 2024/12/16 11:36:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:18.373 request: 00:16:18.373 { 00:16:18.373 "method": "bdev_nvme_attach_controller", 00:16:18.373 "params": { 00:16:18.373 "name": "TLSTEST", 00:16:18.373 "trtype": "tcp", 00:16:18.373 "traddr": "10.0.0.3", 00:16:18.373 "adrfam": "ipv4", 00:16:18.373 "trsvcid": "4420", 00:16:18.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.373 "prchk_reftag": false, 00:16:18.373 "prchk_guard": false, 00:16:18.373 "hdgst": false, 00:16:18.373 "ddgst": false, 00:16:18.373 "psk": "key0", 00:16:18.373 "allow_unrecognized_csi": false 00:16:18.373 } 00:16:18.373 } 00:16:18.373 Got JSON-RPC error response 00:16:18.373 GoRPCClient: error on JSON-RPC call 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85358 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85358 ']' 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85358 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85358 00:16:18.373 killing process with pid 85358 00:16:18.373 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.373 00:16:18.373 Latency(us) 00:16:18.373 [2024-12-16T11:36:23.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.373 [2024-12-16T11:36:23.787Z] =================================================================================================================== 00:16:18.373 [2024-12-16T11:36:23.787Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85358' 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85358 00:16:18.373 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85358 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
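[editor's note] That failure is the point of the @172 case: after the chmod 0666 at @171, keyring_file_add_key rejects the key file (the keyring module logs the offending mode, 0100666) and the subsequent TLS attach can only fail. Only an owner-only mode is accepted, which is why the suite flips the file back at @182 below:

chmod 0666 /tmp/tmp.0XhZFTuKQV   # group/other-readable: keyring_file_add_key fails, Code=-1
chmod 0600 /tmp/tmp.0XhZFTuKQV   # owner-only: the key loads and the TLS attach can proceed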
00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 85102 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85102 ']' 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85102 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85102 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:18.632 killing process with pid 85102 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85102' 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85102 00:16:18.632 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85102 00:16:18.632 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:18.632 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.632 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.632 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85408 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85408 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85408 ']' 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.891 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.891 [2024-12-16 11:36:24.116035] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
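[editor's note] Every app (re)start in this stretch goes through the same waitforlisten gate before the first RPC is issued. A rough sketch of the pattern, assuming polling via rpc_get_methods, not the exact common.sh helper:

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    # poll the RPC socket until the app answers; bail out if the process died
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || return 1
        sleep 0.1
    done
}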
00:16:18.891 [2024-12-16 11:36:24.116159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.891 [2024-12-16 11:36:24.261361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.211 [2024-12-16 11:36:24.320163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.211 [2024-12-16 11:36:24.320406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.211 [2024-12-16 11:36:24.320521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.211 [2024-12-16 11:36:24.320603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.211 [2024-12-16 11:36:24.320669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.211 [2024-12-16 11:36:24.321135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0XhZFTuKQV 00:16:19.794 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:20.052 [2024-12-16 11:36:25.418221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.052 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:20.618 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:20.618 [2024-12-16 11:36:26.026396] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.618 [2024-12-16 11:36:26.026652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.877 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:21.136 malloc0 00:16:21.136 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.394 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:21.652 [2024-12-16 11:36:26.893813] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0XhZFTuKQV': 0100666 00:16:21.652 [2024-12-16 11:36:26.893866] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:21.652 2024/12/16 11:36:26 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.0XhZFTuKQV], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:21.652 request: 00:16:21.652 { 00:16:21.652 "method": "keyring_file_add_key", 00:16:21.652 "params": { 00:16:21.652 "name": "key0", 00:16:21.652 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:21.652 } 00:16:21.652 } 00:16:21.652 Got JSON-RPC error response 00:16:21.652 GoRPCClient: error on JSON-RPC call 00:16:21.652 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.910 [2024-12-16 11:36:27.149894] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:21.910 [2024-12-16 11:36:27.149990] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:21.910 2024/12/16 11:36:27 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:21.910 request: 00:16:21.910 { 00:16:21.910 "method": "nvmf_subsystem_add_host", 00:16:21.910 "params": { 00:16:21.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.910 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.910 "psk": "key0" 00:16:21.910 } 00:16:21.910 } 00:16:21.911 Got JSON-RPC error response 00:16:21.911 GoRPCClient: error on JSON-RPC call 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 85408 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85408 ']' 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 85408 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85408 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.911 killing process with pid 85408 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85408' 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85408 00:16:21.911 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85408 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.0XhZFTuKQV 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85532 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85532 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85532 ']' 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.169 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.169 [2024-12-16 11:36:27.487508] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:22.169 [2024-12-16 11:36:27.487637] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.427 [2024-12-16 11:36:27.631827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.427 [2024-12-16 11:36:27.682369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
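[editor's note] Both negative cases that just concluded (@172 and @178) lean on the harness's NOT wrapper, whose job is to invert the exit status of a command that is expected to fail; the real common.sh version also vets its argument through valid_exec_arg and tracks es, as the trace shows. The bare idea, as a sketch:

NOT() {
    # succeed only if the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

# e.g. assert that a world-readable key file is rejected end to end:
NOT setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV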
00:16:22.427 [2024-12-16 11:36:27.682487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.427 [2024-12-16 11:36:27.682500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.427 [2024-12-16 11:36:27.682509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.427 [2024-12-16 11:36:27.682516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.427 [2024-12-16 11:36:27.682913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.427 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.427 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:22.427 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.427 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.427 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.686 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.686 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:22.686 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0XhZFTuKQV 00:16:22.686 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:22.944 [2024-12-16 11:36:28.122125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.944 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.202 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:23.527 [2024-12-16 11:36:28.670231] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.527 [2024-12-16 11:36:28.670498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.527 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:23.819 malloc0 00:16:23.819 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.077 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:24.336 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:24.594 11:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=85628 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 85628 /var/tmp/bdevperf.sock 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85628 ']' 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.594 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.594 [2024-12-16 11:36:29.854333] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:24.594 [2024-12-16 11:36:29.854462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85628 ] 00:16:24.594 [2024-12-16 11:36:30.001966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.853 [2024-12-16 11:36:30.072565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.419 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.420 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.420 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:25.987 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.987 [2024-12-16 11:36:31.326219] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.987 TLSTESTn1 00:16:26.246 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:26.505 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:26.505 "subsystems": [ 00:16:26.505 { 00:16:26.505 "subsystem": "keyring", 00:16:26.505 "config": [ 00:16:26.505 { 00:16:26.505 "method": "keyring_file_add_key", 00:16:26.505 "params": { 00:16:26.505 "name": "key0", 00:16:26.505 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:26.505 } 00:16:26.505 } 00:16:26.505 ] 00:16:26.505 }, 00:16:26.505 { 00:16:26.505 "subsystem": "iobuf", 00:16:26.505 "config": [ 00:16:26.505 { 00:16:26.505 "method": "iobuf_set_options", 00:16:26.505 "params": { 00:16:26.505 "enable_numa": false, 00:16:26.505 "large_bufsize": 135168, 00:16:26.505 
"large_pool_count": 1024, 00:16:26.505 "small_bufsize": 8192, 00:16:26.505 "small_pool_count": 8192 00:16:26.505 } 00:16:26.505 } 00:16:26.505 ] 00:16:26.505 }, 00:16:26.506 { 00:16:26.506 "subsystem": "sock", 00:16:26.506 "config": [ 00:16:26.506 { 00:16:26.506 "method": "sock_set_default_impl", 00:16:26.506 "params": { 00:16:26.506 "impl_name": "posix" 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "sock_impl_set_options", 00:16:26.506 "params": { 00:16:26.506 "enable_ktls": false, 00:16:26.506 "enable_placement_id": 0, 00:16:26.506 "enable_quickack": false, 00:16:26.506 "enable_recv_pipe": true, 00:16:26.506 "enable_zerocopy_send_client": false, 00:16:26.506 "enable_zerocopy_send_server": true, 00:16:26.506 "impl_name": "ssl", 00:16:26.506 "recv_buf_size": 4096, 00:16:26.506 "send_buf_size": 4096, 00:16:26.506 "tls_version": 0, 00:16:26.506 "zerocopy_threshold": 0 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "sock_impl_set_options", 00:16:26.506 "params": { 00:16:26.506 "enable_ktls": false, 00:16:26.506 "enable_placement_id": 0, 00:16:26.506 "enable_quickack": false, 00:16:26.506 "enable_recv_pipe": true, 00:16:26.506 "enable_zerocopy_send_client": false, 00:16:26.506 "enable_zerocopy_send_server": true, 00:16:26.506 "impl_name": "posix", 00:16:26.506 "recv_buf_size": 2097152, 00:16:26.506 "send_buf_size": 2097152, 00:16:26.506 "tls_version": 0, 00:16:26.506 "zerocopy_threshold": 0 00:16:26.506 } 00:16:26.506 } 00:16:26.506 ] 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "subsystem": "vmd", 00:16:26.506 "config": [] 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "subsystem": "accel", 00:16:26.506 "config": [ 00:16:26.506 { 00:16:26.506 "method": "accel_set_options", 00:16:26.506 "params": { 00:16:26.506 "buf_count": 2048, 00:16:26.506 "large_cache_size": 16, 00:16:26.506 "sequence_count": 2048, 00:16:26.506 "small_cache_size": 128, 00:16:26.506 "task_count": 2048 00:16:26.506 } 00:16:26.506 } 00:16:26.506 ] 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "subsystem": "bdev", 00:16:26.506 "config": [ 00:16:26.506 { 00:16:26.506 "method": "bdev_set_options", 00:16:26.506 "params": { 00:16:26.506 "bdev_auto_examine": true, 00:16:26.506 "bdev_io_cache_size": 256, 00:16:26.506 "bdev_io_pool_size": 65535, 00:16:26.506 "iobuf_large_cache_size": 16, 00:16:26.506 "iobuf_small_cache_size": 128 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "bdev_raid_set_options", 00:16:26.506 "params": { 00:16:26.506 "process_max_bandwidth_mb_sec": 0, 00:16:26.506 "process_window_size_kb": 1024 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "bdev_iscsi_set_options", 00:16:26.506 "params": { 00:16:26.506 "timeout_sec": 30 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "bdev_nvme_set_options", 00:16:26.506 "params": { 00:16:26.506 "action_on_timeout": "none", 00:16:26.506 "allow_accel_sequence": false, 00:16:26.506 "arbitration_burst": 0, 00:16:26.506 "bdev_retry_count": 3, 00:16:26.506 "ctrlr_loss_timeout_sec": 0, 00:16:26.506 "delay_cmd_submit": true, 00:16:26.506 "dhchap_dhgroups": [ 00:16:26.506 "null", 00:16:26.506 "ffdhe2048", 00:16:26.506 "ffdhe3072", 00:16:26.506 "ffdhe4096", 00:16:26.506 "ffdhe6144", 00:16:26.506 "ffdhe8192" 00:16:26.506 ], 00:16:26.506 "dhchap_digests": [ 00:16:26.506 "sha256", 00:16:26.506 "sha384", 00:16:26.506 "sha512" 00:16:26.506 ], 00:16:26.506 "disable_auto_failback": false, 00:16:26.506 "fast_io_fail_timeout_sec": 0, 00:16:26.506 "generate_uuids": false, 00:16:26.506 
"high_priority_weight": 0, 00:16:26.506 "io_path_stat": false, 00:16:26.506 "io_queue_requests": 0, 00:16:26.506 "keep_alive_timeout_ms": 10000, 00:16:26.506 "low_priority_weight": 0, 00:16:26.506 "medium_priority_weight": 0, 00:16:26.506 "nvme_adminq_poll_period_us": 10000, 00:16:26.506 "nvme_error_stat": false, 00:16:26.506 "nvme_ioq_poll_period_us": 0, 00:16:26.506 "rdma_cm_event_timeout_ms": 0, 00:16:26.506 "rdma_max_cq_size": 0, 00:16:26.506 "rdma_srq_size": 0, 00:16:26.506 "rdma_umr_per_io": false, 00:16:26.506 "reconnect_delay_sec": 0, 00:16:26.506 "timeout_admin_us": 0, 00:16:26.506 "timeout_us": 0, 00:16:26.506 "transport_ack_timeout": 0, 00:16:26.506 "transport_retry_count": 4, 00:16:26.506 "transport_tos": 0 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "bdev_nvme_set_hotplug", 00:16:26.506 "params": { 00:16:26.506 "enable": false, 00:16:26.506 "period_us": 100000 00:16:26.506 } 00:16:26.506 }, 00:16:26.506 { 00:16:26.506 "method": "bdev_malloc_create", 00:16:26.506 "params": { 00:16:26.506 "block_size": 4096, 00:16:26.506 "dif_is_head_of_md": false, 00:16:26.506 "dif_pi_format": 0, 00:16:26.506 "dif_type": 0, 00:16:26.506 "md_size": 0, 00:16:26.506 "name": "malloc0", 00:16:26.506 "num_blocks": 8192, 00:16:26.506 "optimal_io_boundary": 0, 00:16:26.507 "physical_block_size": 4096, 00:16:26.507 "uuid": "d0ab00ee-1677-476b-a704-26d59b550ac1" 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "bdev_wait_for_examine" 00:16:26.507 } 00:16:26.507 ] 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "subsystem": "nbd", 00:16:26.507 "config": [] 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "subsystem": "scheduler", 00:16:26.507 "config": [ 00:16:26.507 { 00:16:26.507 "method": "framework_set_scheduler", 00:16:26.507 "params": { 00:16:26.507 "name": "static" 00:16:26.507 } 00:16:26.507 } 00:16:26.507 ] 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "subsystem": "nvmf", 00:16:26.507 "config": [ 00:16:26.507 { 00:16:26.507 "method": "nvmf_set_config", 00:16:26.507 "params": { 00:16:26.507 "admin_cmd_passthru": { 00:16:26.507 "identify_ctrlr": false 00:16:26.507 }, 00:16:26.507 "dhchap_dhgroups": [ 00:16:26.507 "null", 00:16:26.507 "ffdhe2048", 00:16:26.507 "ffdhe3072", 00:16:26.507 "ffdhe4096", 00:16:26.507 "ffdhe6144", 00:16:26.507 "ffdhe8192" 00:16:26.507 ], 00:16:26.507 "dhchap_digests": [ 00:16:26.507 "sha256", 00:16:26.507 "sha384", 00:16:26.507 "sha512" 00:16:26.507 ], 00:16:26.507 "discovery_filter": "match_any" 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_set_max_subsystems", 00:16:26.507 "params": { 00:16:26.507 "max_subsystems": 1024 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_set_crdt", 00:16:26.507 "params": { 00:16:26.507 "crdt1": 0, 00:16:26.507 "crdt2": 0, 00:16:26.507 "crdt3": 0 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_create_transport", 00:16:26.507 "params": { 00:16:26.507 "abort_timeout_sec": 1, 00:16:26.507 "ack_timeout": 0, 00:16:26.507 "buf_cache_size": 4294967295, 00:16:26.507 "c2h_success": false, 00:16:26.507 "data_wr_pool_size": 0, 00:16:26.507 "dif_insert_or_strip": false, 00:16:26.507 "in_capsule_data_size": 4096, 00:16:26.507 "io_unit_size": 131072, 00:16:26.507 "max_aq_depth": 128, 00:16:26.507 "max_io_qpairs_per_ctrlr": 127, 00:16:26.507 "max_io_size": 131072, 00:16:26.507 "max_queue_depth": 128, 00:16:26.507 "num_shared_buffers": 511, 00:16:26.507 "sock_priority": 0, 00:16:26.507 "trtype": "TCP", 00:16:26.507 "zcopy": false 00:16:26.507 
} 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_create_subsystem", 00:16:26.507 "params": { 00:16:26.507 "allow_any_host": false, 00:16:26.507 "ana_reporting": false, 00:16:26.507 "max_cntlid": 65519, 00:16:26.507 "max_namespaces": 10, 00:16:26.507 "min_cntlid": 1, 00:16:26.507 "model_number": "SPDK bdev Controller", 00:16:26.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.507 "serial_number": "SPDK00000000000001" 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_subsystem_add_host", 00:16:26.507 "params": { 00:16:26.507 "host": "nqn.2016-06.io.spdk:host1", 00:16:26.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.507 "psk": "key0" 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_subsystem_add_ns", 00:16:26.507 "params": { 00:16:26.507 "namespace": { 00:16:26.507 "bdev_name": "malloc0", 00:16:26.507 "nguid": "D0AB00EE1677476BA70426D59B550AC1", 00:16:26.507 "no_auto_visible": false, 00:16:26.507 "nsid": 1, 00:16:26.507 "uuid": "d0ab00ee-1677-476b-a704-26d59b550ac1" 00:16:26.507 }, 00:16:26.507 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:26.507 } 00:16:26.507 }, 00:16:26.507 { 00:16:26.507 "method": "nvmf_subsystem_add_listener", 00:16:26.507 "params": { 00:16:26.507 "listen_address": { 00:16:26.507 "adrfam": "IPv4", 00:16:26.507 "traddr": "10.0.0.3", 00:16:26.507 "trsvcid": "4420", 00:16:26.507 "trtype": "TCP" 00:16:26.507 }, 00:16:26.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.507 "secure_channel": true 00:16:26.507 } 00:16:26.507 } 00:16:26.507 ] 00:16:26.507 } 00:16:26.507 ] 00:16:26.507 }' 00:16:26.507 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:26.766 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:26.766 "subsystems": [ 00:16:26.766 { 00:16:26.767 "subsystem": "keyring", 00:16:26.767 "config": [ 00:16:26.767 { 00:16:26.767 "method": "keyring_file_add_key", 00:16:26.767 "params": { 00:16:26.767 "name": "key0", 00:16:26.767 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:26.767 } 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "iobuf", 00:16:26.767 "config": [ 00:16:26.767 { 00:16:26.767 "method": "iobuf_set_options", 00:16:26.767 "params": { 00:16:26.767 "enable_numa": false, 00:16:26.767 "large_bufsize": 135168, 00:16:26.767 "large_pool_count": 1024, 00:16:26.767 "small_bufsize": 8192, 00:16:26.767 "small_pool_count": 8192 00:16:26.767 } 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "sock", 00:16:26.767 "config": [ 00:16:26.767 { 00:16:26.767 "method": "sock_set_default_impl", 00:16:26.767 "params": { 00:16:26.767 "impl_name": "posix" 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "sock_impl_set_options", 00:16:26.767 "params": { 00:16:26.767 "enable_ktls": false, 00:16:26.767 "enable_placement_id": 0, 00:16:26.767 "enable_quickack": false, 00:16:26.767 "enable_recv_pipe": true, 00:16:26.767 "enable_zerocopy_send_client": false, 00:16:26.767 "enable_zerocopy_send_server": true, 00:16:26.767 "impl_name": "ssl", 00:16:26.767 "recv_buf_size": 4096, 00:16:26.767 "send_buf_size": 4096, 00:16:26.767 "tls_version": 0, 00:16:26.767 "zerocopy_threshold": 0 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "sock_impl_set_options", 00:16:26.767 "params": { 00:16:26.767 "enable_ktls": false, 00:16:26.767 "enable_placement_id": 0, 00:16:26.767 "enable_quickack": 
false, 00:16:26.767 "enable_recv_pipe": true, 00:16:26.767 "enable_zerocopy_send_client": false, 00:16:26.767 "enable_zerocopy_send_server": true, 00:16:26.767 "impl_name": "posix", 00:16:26.767 "recv_buf_size": 2097152, 00:16:26.767 "send_buf_size": 2097152, 00:16:26.767 "tls_version": 0, 00:16:26.767 "zerocopy_threshold": 0 00:16:26.767 } 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "vmd", 00:16:26.767 "config": [] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "accel", 00:16:26.767 "config": [ 00:16:26.767 { 00:16:26.767 "method": "accel_set_options", 00:16:26.767 "params": { 00:16:26.767 "buf_count": 2048, 00:16:26.767 "large_cache_size": 16, 00:16:26.767 "sequence_count": 2048, 00:16:26.767 "small_cache_size": 128, 00:16:26.767 "task_count": 2048 00:16:26.767 } 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "bdev", 00:16:26.767 "config": [ 00:16:26.767 { 00:16:26.767 "method": "bdev_set_options", 00:16:26.767 "params": { 00:16:26.767 "bdev_auto_examine": true, 00:16:26.767 "bdev_io_cache_size": 256, 00:16:26.767 "bdev_io_pool_size": 65535, 00:16:26.767 "iobuf_large_cache_size": 16, 00:16:26.767 "iobuf_small_cache_size": 128 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_raid_set_options", 00:16:26.767 "params": { 00:16:26.767 "process_max_bandwidth_mb_sec": 0, 00:16:26.767 "process_window_size_kb": 1024 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_iscsi_set_options", 00:16:26.767 "params": { 00:16:26.767 "timeout_sec": 30 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_nvme_set_options", 00:16:26.767 "params": { 00:16:26.767 "action_on_timeout": "none", 00:16:26.767 "allow_accel_sequence": false, 00:16:26.767 "arbitration_burst": 0, 00:16:26.767 "bdev_retry_count": 3, 00:16:26.767 "ctrlr_loss_timeout_sec": 0, 00:16:26.767 "delay_cmd_submit": true, 00:16:26.767 "dhchap_dhgroups": [ 00:16:26.767 "null", 00:16:26.767 "ffdhe2048", 00:16:26.767 "ffdhe3072", 00:16:26.767 "ffdhe4096", 00:16:26.767 "ffdhe6144", 00:16:26.767 "ffdhe8192" 00:16:26.767 ], 00:16:26.767 "dhchap_digests": [ 00:16:26.767 "sha256", 00:16:26.767 "sha384", 00:16:26.767 "sha512" 00:16:26.767 ], 00:16:26.767 "disable_auto_failback": false, 00:16:26.767 "fast_io_fail_timeout_sec": 0, 00:16:26.767 "generate_uuids": false, 00:16:26.767 "high_priority_weight": 0, 00:16:26.767 "io_path_stat": false, 00:16:26.767 "io_queue_requests": 512, 00:16:26.767 "keep_alive_timeout_ms": 10000, 00:16:26.767 "low_priority_weight": 0, 00:16:26.767 "medium_priority_weight": 0, 00:16:26.767 "nvme_adminq_poll_period_us": 10000, 00:16:26.767 "nvme_error_stat": false, 00:16:26.767 "nvme_ioq_poll_period_us": 0, 00:16:26.767 "rdma_cm_event_timeout_ms": 0, 00:16:26.767 "rdma_max_cq_size": 0, 00:16:26.767 "rdma_srq_size": 0, 00:16:26.767 "rdma_umr_per_io": false, 00:16:26.767 "reconnect_delay_sec": 0, 00:16:26.767 "timeout_admin_us": 0, 00:16:26.767 "timeout_us": 0, 00:16:26.767 "transport_ack_timeout": 0, 00:16:26.767 "transport_retry_count": 4, 00:16:26.767 "transport_tos": 0 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_nvme_attach_controller", 00:16:26.767 "params": { 00:16:26.767 "adrfam": "IPv4", 00:16:26.767 "ctrlr_loss_timeout_sec": 0, 00:16:26.767 "ddgst": false, 00:16:26.767 "fast_io_fail_timeout_sec": 0, 00:16:26.767 "hdgst": false, 00:16:26.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.767 "multipath": "multipath", 00:16:26.767 
"name": "TLSTEST", 00:16:26.767 "prchk_guard": false, 00:16:26.767 "prchk_reftag": false, 00:16:26.767 "psk": "key0", 00:16:26.767 "reconnect_delay_sec": 0, 00:16:26.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.767 "traddr": "10.0.0.3", 00:16:26.767 "trsvcid": "4420", 00:16:26.767 "trtype": "TCP" 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_nvme_set_hotplug", 00:16:26.767 "params": { 00:16:26.767 "enable": false, 00:16:26.767 "period_us": 100000 00:16:26.767 } 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "method": "bdev_wait_for_examine" 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }, 00:16:26.767 { 00:16:26.767 "subsystem": "nbd", 00:16:26.767 "config": [] 00:16:26.767 } 00:16:26.767 ] 00:16:26.767 }' 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 85628 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85628 ']' 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85628 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85628 00:16:26.767 killing process with pid 85628 00:16:26.767 Received shutdown signal, test time was about 10.000000 seconds 00:16:26.767 00:16:26.767 Latency(us) 00:16:26.767 [2024-12-16T11:36:32.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.767 [2024-12-16T11:36:32.181Z] =================================================================================================================== 00:16:26.767 [2024-12-16T11:36:32.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85628' 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85628 00:16:26.767 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85628 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 85532 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85532 ']' 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85532 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85532 00:16:27.026 killing process with pid 85532 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 85532' 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85532 00:16:27.026 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85532 00:16:27.286 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:27.286 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.286 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.286 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:27.286 "subsystems": [ 00:16:27.286 { 00:16:27.286 "subsystem": "keyring", 00:16:27.286 "config": [ 00:16:27.286 { 00:16:27.286 "method": "keyring_file_add_key", 00:16:27.286 "params": { 00:16:27.286 "name": "key0", 00:16:27.286 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:27.286 } 00:16:27.286 } 00:16:27.286 ] 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "subsystem": "iobuf", 00:16:27.286 "config": [ 00:16:27.286 { 00:16:27.286 "method": "iobuf_set_options", 00:16:27.286 "params": { 00:16:27.286 "enable_numa": false, 00:16:27.286 "large_bufsize": 135168, 00:16:27.286 "large_pool_count": 1024, 00:16:27.286 "small_bufsize": 8192, 00:16:27.286 "small_pool_count": 8192 00:16:27.286 } 00:16:27.286 } 00:16:27.286 ] 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "subsystem": "sock", 00:16:27.286 "config": [ 00:16:27.286 { 00:16:27.286 "method": "sock_set_default_impl", 00:16:27.286 "params": { 00:16:27.286 "impl_name": "posix" 00:16:27.286 } 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "method": "sock_impl_set_options", 00:16:27.286 "params": { 00:16:27.286 "enable_ktls": false, 00:16:27.286 "enable_placement_id": 0, 00:16:27.286 "enable_quickack": false, 00:16:27.286 "enable_recv_pipe": true, 00:16:27.286 "enable_zerocopy_send_client": false, 00:16:27.286 "enable_zerocopy_send_server": true, 00:16:27.286 "impl_name": "ssl", 00:16:27.286 "recv_buf_size": 4096, 00:16:27.286 "send_buf_size": 4096, 00:16:27.286 "tls_version": 0, 00:16:27.286 "zerocopy_threshold": 0 00:16:27.286 } 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "method": "sock_impl_set_options", 00:16:27.286 "params": { 00:16:27.286 "enable_ktls": false, 00:16:27.286 "enable_placement_id": 0, 00:16:27.286 "enable_quickack": false, 00:16:27.286 "enable_recv_pipe": true, 00:16:27.286 "enable_zerocopy_send_client": false, 00:16:27.286 "enable_zerocopy_send_server": true, 00:16:27.286 "impl_name": "posix", 00:16:27.286 "recv_buf_size": 2097152, 00:16:27.286 "send_buf_size": 2097152, 00:16:27.286 "tls_version": 0, 00:16:27.286 "zerocopy_threshold": 0 00:16:27.286 } 00:16:27.286 } 00:16:27.286 ] 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "subsystem": "vmd", 00:16:27.286 "config": [] 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "subsystem": "accel", 00:16:27.286 "config": [ 00:16:27.286 { 00:16:27.286 "method": "accel_set_options", 00:16:27.286 "params": { 00:16:27.286 "buf_count": 2048, 00:16:27.286 "large_cache_size": 16, 00:16:27.286 "sequence_count": 2048, 00:16:27.286 "small_cache_size": 128, 00:16:27.286 "task_count": 2048 00:16:27.286 } 00:16:27.286 } 00:16:27.286 ] 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "subsystem": "bdev", 00:16:27.286 "config": [ 00:16:27.286 { 00:16:27.286 "method": "bdev_set_options", 00:16:27.286 "params": { 00:16:27.286 "bdev_auto_examine": true, 00:16:27.286 "bdev_io_cache_size": 256, 00:16:27.286 
"bdev_io_pool_size": 65535, 00:16:27.286 "iobuf_large_cache_size": 16, 00:16:27.286 "iobuf_small_cache_size": 128 00:16:27.286 } 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "method": "bdev_raid_set_options", 00:16:27.286 "params": { 00:16:27.286 "process_max_bandwidth_mb_sec": 0, 00:16:27.286 "process_window_size_kb": 1024 00:16:27.286 } 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "method": "bdev_iscsi_set_options", 00:16:27.286 "params": { 00:16:27.286 "timeout_sec": 30 00:16:27.286 } 00:16:27.286 }, 00:16:27.286 { 00:16:27.286 "method": "bdev_nvme_set_options", 00:16:27.286 "params": { 00:16:27.286 "action_on_timeout": "none", 00:16:27.286 "allow_accel_sequence": false, 00:16:27.286 "arbitration_burst": 0, 00:16:27.286 "bdev_retry_count": 3, 00:16:27.286 "ctrlr_loss_timeout_sec": 0, 00:16:27.286 "delay_cmd_submit": true, 00:16:27.286 "dhchap_dhgroups": [ 00:16:27.286 "null", 00:16:27.286 "ffdhe2048", 00:16:27.286 "ffdhe3072", 00:16:27.286 "ffdhe4096", 00:16:27.286 "ffdhe6144", 00:16:27.286 "ffdhe8192" 00:16:27.286 ], 00:16:27.286 "dhchap_digests": [ 00:16:27.286 "sha256", 00:16:27.286 "sha384", 00:16:27.286 "sha512" 00:16:27.286 ], 00:16:27.286 "disable_auto_failback": false, 00:16:27.286 "fast_io_fail_timeout_sec": 0, 00:16:27.286 "generate_uuids": false, 00:16:27.286 "high_priority_weight": 0, 00:16:27.286 "io_path_stat": false, 00:16:27.286 "io_queue_requests": 0, 00:16:27.286 "keep_alive_timeout_ms": 10000, 00:16:27.286 "low_priority_weight": 0, 00:16:27.286 "medium_priority_weight": 0, 00:16:27.286 "nvme_adminq_poll_period_us": 10000, 00:16:27.286 "nvme_error_stat": false, 00:16:27.286 "nvme_ioq_poll_period_us": 0, 00:16:27.286 "rdma_cm_event_timeout_ms": 0, 00:16:27.286 "rdma_max_cq_size": 0, 00:16:27.286 "rdma_srq_size": 0, 00:16:27.286 "rdma_umr_per_io": false, 00:16:27.286 "reconnect_delay_sec": 0, 00:16:27.286 "timeout_admin_us": 0, 00:16:27.286 "timeout_us": 0, 00:16:27.286 "transport_ack_timeout": 0, 00:16:27.286 "transport_retry_count": 4, 00:16:27.286 "transport_tos": 0 00:16:27.286 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "bdev_nvme_set_hotplug", 00:16:27.287 "params": { 00:16:27.287 "enable": false, 00:16:27.287 "period_us": 100000 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "bdev_malloc_create", 00:16:27.287 "params": { 00:16:27.287 "block_size": 4096, 00:16:27.287 "dif_is_head_of_md": false, 00:16:27.287 "dif_pi_format": 0, 00:16:27.287 "dif_type": 0, 00:16:27.287 "md_size": 0, 00:16:27.287 "name": "malloc0", 00:16:27.287 "num_blocks": 8192, 00:16:27.287 "optimal_io_boundary": 0, 00:16:27.287 "physical_block_size": 4096, 00:16:27.287 "uuid": "d0ab00ee-1677-476b-a704-26d59b550ac1" 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "bdev_wait_for_examine" 00:16:27.287 } 00:16:27.287 ] 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "subsystem": "nbd", 00:16:27.287 "config": [] 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "subsystem": "scheduler", 00:16:27.287 "config": [ 00:16:27.287 { 00:16:27.287 "method": "framework_set_scheduler", 00:16:27.287 "params": { 00:16:27.287 "name": "static" 00:16:27.287 } 00:16:27.287 } 00:16:27.287 ] 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "subsystem": "nvmf", 00:16:27.287 "config": [ 00:16:27.287 { 00:16:27.287 "method": "nvmf_set_config", 00:16:27.287 "params": { 00:16:27.287 "admin_cmd_passthru": { 00:16:27.287 "identify_ctrlr": false 00:16:27.287 }, 00:16:27.287 "dhchap_dhgroups": [ 00:16:27.287 "null", 00:16:27.287 "ffdhe2048", 00:16:27.287 "ffdhe3072", 00:16:27.287 "ffdhe4096", 
00:16:27.287 "ffdhe6144", 00:16:27.287 "ffdhe8192" 00:16:27.287 ], 00:16:27.287 "dhchap_digests": [ 00:16:27.287 "sha256", 00:16:27.287 "sha384", 00:16:27.287 "sha512" 00:16:27.287 ], 00:16:27.287 "discovery_filter": "match_any" 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_set_max_subsystems", 00:16:27.287 "params": { 00:16:27.287 "max_subsystems": 1024 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_set_crdt", 00:16:27.287 "params": { 00:16:27.287 "crdt1": 0, 00:16:27.287 "crdt2": 0, 00:16:27.287 "crdt3": 0 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_create_transport", 00:16:27.287 "params": { 00:16:27.287 "abort_timeout_sec": 1, 00:16:27.287 "ack_timeout": 0, 00:16:27.287 "buf_cache_size": 4294967295, 00:16:27.287 "c2h_success": false, 00:16:27.287 "data_wr_pool_size": 0, 00:16:27.287 "dif_insert_or_strip": false, 00:16:27.287 "in_capsule_data_size": 4096, 00:16:27.287 "io_unit_size": 131072, 00:16:27.287 "max_aq_depth": 128, 00:16:27.287 "max_io_qpairs_per_ctrlr": 127, 00:16:27.287 "max_io_size": 131072, 00:16:27.287 "max_queue_depth": 128, 00:16:27.287 "num_shared_buffers": 511, 00:16:27.287 "sock_priority": 0, 00:16:27.287 "trtype": "TCP", 00:16:27.287 "zcopy": false 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_create_subsystem", 00:16:27.287 "params": { 00:16:27.287 "allow_any_host": false, 00:16:27.287 "ana_reporting": false, 00:16:27.287 "max_cntlid": 65519, 00:16:27.287 "max_namespaces": 10, 00:16:27.287 "min_cntlid": 1, 00:16:27.287 "model_number": "SPDK bdev Controller", 00:16:27.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.287 "serial_number": "SPDK00000000000001" 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_subsystem_add_host", 00:16:27.287 "params": { 00:16:27.287 "host": "nqn.2016-06.io.spdk:host1", 00:16:27.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.287 "psk": "key0" 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_subsystem_add_ns", 00:16:27.287 "params": { 00:16:27.287 "namespace": { 00:16:27.287 "bdev_name": "malloc0", 00:16:27.287 "nguid": "D0AB00EE1677476BA70426D59B550AC1", 00:16:27.287 "no_auto_visible": false, 00:16:27.287 "nsid": 1, 00:16:27.287 "uuid": "d0ab00ee-1677-476b-a704-26d59b550ac1" 00:16:27.287 }, 00:16:27.287 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:27.287 } 00:16:27.287 }, 00:16:27.287 { 00:16:27.287 "method": "nvmf_subsystem_add_listener", 00:16:27.287 "params": { 00:16:27.287 "listen_address": { 00:16:27.287 "adrfam": "IPv4", 00:16:27.287 "traddr": "10.0.0.3", 00:16:27.287 "trsvcid": "4420", 00:16:27.287 "trtype": "TCP" 00:16:27.287 }, 00:16:27.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.287 "secure_channel": true 00:16:27.287 } 00:16:27.287 } 00:16:27.287 ] 00:16:27.287 } 00:16:27.287 ] 00:16:27.287 }' 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85719 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85719 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85719 ']' 00:16:27.287 11:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.287 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.287 [2024-12-16 11:36:32.629396] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:27.287 [2024-12-16 11:36:32.629474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.546 [2024-12-16 11:36:32.770860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.546 [2024-12-16 11:36:32.822796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.546 [2024-12-16 11:36:32.822861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.546 [2024-12-16 11:36:32.822873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.546 [2024-12-16 11:36:32.822881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.546 [2024-12-16 11:36:32.822889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.546 [2024-12-16 11:36:32.823370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.805 [2024-12-16 11:36:33.064131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.805 [2024-12-16 11:36:33.096080] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.805 [2024-12-16 11:36:33.096342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
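[Note] The bdevperf process launched below receives its whole JSON configuration as "-c /dev/fd/63" rather than as a file on disk. That descriptor is bash process substitution: the harness captures a config blob with save_config (the bdevperfconf dump earlier in this log) and replays it into the new process. A minimal sketch of the pattern, assuming the SPDK checkout paths that appear in this log; this is an inference from the trace, not the literal tls.sh source:

# Capture the current JSON configuration of a running SPDK app over its RPC socket.
conf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

# Replay it into a fresh bdevperf instance; <(echo ...) expands to /dev/fd/63,
# which is why the trace shows "-c /dev/fd/63" instead of a filename.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$conf")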
00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=85762 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 85762 /var/tmp/bdevperf.sock 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85762 ']' 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.372 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:28.372 "subsystems": [ 00:16:28.372 { 00:16:28.372 "subsystem": "keyring", 00:16:28.372 "config": [ 00:16:28.372 { 00:16:28.372 "method": "keyring_file_add_key", 00:16:28.372 "params": { 00:16:28.372 "name": "key0", 00:16:28.372 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:28.372 } 00:16:28.372 } 00:16:28.372 ] 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "subsystem": "iobuf", 00:16:28.372 "config": [ 00:16:28.372 { 00:16:28.372 "method": "iobuf_set_options", 00:16:28.372 "params": { 00:16:28.372 "enable_numa": false, 00:16:28.372 "large_bufsize": 135168, 00:16:28.372 "large_pool_count": 1024, 00:16:28.372 "small_bufsize": 8192, 00:16:28.372 "small_pool_count": 8192 00:16:28.372 } 00:16:28.372 } 00:16:28.372 ] 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "subsystem": "sock", 00:16:28.372 "config": [ 00:16:28.372 { 00:16:28.372 "method": "sock_set_default_impl", 00:16:28.372 "params": { 00:16:28.372 "impl_name": "posix" 00:16:28.372 } 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "method": "sock_impl_set_options", 00:16:28.372 "params": { 00:16:28.372 "enable_ktls": false, 00:16:28.372 "enable_placement_id": 0, 00:16:28.372 "enable_quickack": false, 00:16:28.372 "enable_recv_pipe": true, 00:16:28.372 "enable_zerocopy_send_client": false, 00:16:28.372 "enable_zerocopy_send_server": true, 00:16:28.372 "impl_name": "ssl", 00:16:28.372 "recv_buf_size": 4096, 00:16:28.372 "send_buf_size": 4096, 00:16:28.372 "tls_version": 0, 00:16:28.372 "zerocopy_threshold": 0 00:16:28.372 } 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "method": "sock_impl_set_options", 00:16:28.372 "params": { 00:16:28.372 "enable_ktls": false, 00:16:28.372 "enable_placement_id": 0, 00:16:28.372 "enable_quickack": false, 00:16:28.372 "enable_recv_pipe": true, 00:16:28.372 "enable_zerocopy_send_client": false, 00:16:28.372 "enable_zerocopy_send_server": true, 00:16:28.372 "impl_name": "posix", 00:16:28.372 "recv_buf_size": 2097152, 00:16:28.372 "send_buf_size": 2097152, 00:16:28.372 "tls_version": 0, 00:16:28.372 "zerocopy_threshold": 0 00:16:28.372 } 00:16:28.372 } 00:16:28.372 ] 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "subsystem": "vmd", 00:16:28.372 "config": [] 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "subsystem": "accel", 00:16:28.372 "config": [ 00:16:28.372 { 00:16:28.372 "method": "accel_set_options", 00:16:28.372 "params": { 
00:16:28.372 "buf_count": 2048, 00:16:28.372 "large_cache_size": 16, 00:16:28.372 "sequence_count": 2048, 00:16:28.372 "small_cache_size": 128, 00:16:28.372 "task_count": 2048 00:16:28.372 } 00:16:28.372 } 00:16:28.372 ] 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "subsystem": "bdev", 00:16:28.372 "config": [ 00:16:28.372 { 00:16:28.372 "method": "bdev_set_options", 00:16:28.372 "params": { 00:16:28.372 "bdev_auto_examine": true, 00:16:28.372 "bdev_io_cache_size": 256, 00:16:28.372 "bdev_io_pool_size": 65535, 00:16:28.372 "iobuf_large_cache_size": 16, 00:16:28.372 "iobuf_small_cache_size": 128 00:16:28.372 } 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "method": "bdev_raid_set_options", 00:16:28.372 "params": { 00:16:28.372 "process_max_bandwidth_mb_sec": 0, 00:16:28.372 "process_window_size_kb": 1024 00:16:28.372 } 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "method": "bdev_iscsi_set_options", 00:16:28.372 "params": { 00:16:28.372 "timeout_sec": 30 00:16:28.372 } 00:16:28.372 }, 00:16:28.372 { 00:16:28.372 "method": "bdev_nvme_set_options", 00:16:28.372 "params": { 00:16:28.372 "action_on_timeout": "none", 00:16:28.372 "allow_accel_sequence": false, 00:16:28.372 "arbitration_burst": 0, 00:16:28.372 "bdev_retry_count": 3, 00:16:28.372 "ctrlr_loss_timeout_sec": 0, 00:16:28.372 "delay_cmd_submit": true, 00:16:28.372 "dhchap_dhgroups": [ 00:16:28.372 "null", 00:16:28.372 "ffdhe2048", 00:16:28.372 "ffdhe3072", 00:16:28.372 "ffdhe4096", 00:16:28.372 "ffdhe6144", 00:16:28.372 "ffdhe8192" 00:16:28.372 ], 00:16:28.372 "dhchap_digests": [ 00:16:28.372 "sha256", 00:16:28.372 "sha384", 00:16:28.372 "sha512" 00:16:28.372 ], 00:16:28.372 "disable_auto_failback": false, 00:16:28.372 "fast_io_fail_timeout_sec": 0, 00:16:28.372 "generate_uuids": false, 00:16:28.372 "high_priority_weight": 0, 00:16:28.373 "io_path_stat": false, 00:16:28.373 "io_queue_requests": 512, 00:16:28.373 "keep_alive_timeout_ms": 10000, 00:16:28.373 "low_priority_weight": 0, 00:16:28.373 "medium_priority_weight": 0, 00:16:28.373 "nvme_adminq_poll_period_us": 10000, 00:16:28.373 "nvme_error_stat": false, 00:16:28.373 "nvme_ioq_poll_period_us": 0, 00:16:28.373 "rdma_cm_event_timeout_ms": 0, 00:16:28.373 "rdma_max_cq_size": 0, 00:16:28.373 "rdma_srq_size": 0, 00:16:28.373 "rdma_umr_per_io": false, 00:16:28.373 "reconnect_delay_sec": 0, 00:16:28.373 "timeout_admin_us": 0, 00:16:28.373 "timeout_us": 0, 00:16:28.373 "transport_ack_timeout": 0, 00:16:28.373 "transport_retry_count": 4, 00:16:28.373 "transport_tos": 0 00:16:28.373 } 00:16:28.373 }, 00:16:28.373 { 00:16:28.373 "method": "bdev_nvme_attach_controller", 00:16:28.373 "params": { 00:16:28.373 "adrfam": "IPv4", 00:16:28.373 "ctrlr_loss_timeout_sec": 0, 00:16:28.373 "ddgst": false, 00:16:28.373 "fast_io_fail_timeout_sec": 0, 00:16:28.373 "hdgst": false, 00:16:28.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.373 "multipath": "multipath", 00:16:28.373 "name": "TLSTEST", 00:16:28.373 "prchk_guard": false, 00:16:28.373 "prchk_reftag": false, 00:16:28.373 "psk": "key0", 00:16:28.373 "reconnect_delay_sec": 0, 00:16:28.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.373 "traddr": "10.0.0.3", 00:16:28.373 "trsvcid": "4420", 00:16:28.373 "trtype": "TCP" 00:16:28.373 } 00:16:28.373 }, 00:16:28.373 { 00:16:28.373 "method": "bdev_nvme_set_hotplug", 00:16:28.373 "params": { 00:16:28.373 "enable": false, 00:16:28.373 "period_us": 100000 00:16:28.373 } 00:16:28.373 }, 00:16:28.373 { 00:16:28.373 "method": "bdev_wait_for_examine" 00:16:28.373 } 00:16:28.373 ] 00:16:28.373 }, 00:16:28.373 
{ 00:16:28.373 "subsystem": "nbd", 00:16:28.373 "config": [] 00:16:28.373 } 00:16:28.373 ] 00:16:28.373 }' 00:16:28.373 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.373 [2024-12-16 11:36:33.741857] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:28.373 [2024-12-16 11:36:33.741972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85762 ] 00:16:28.632 [2024-12-16 11:36:33.890869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.632 [2024-12-16 11:36:33.955248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.890 [2024-12-16 11:36:34.137051] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.457 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.457 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.457 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:29.715 Running I/O for 10 seconds... 00:16:31.622 4224.00 IOPS, 16.50 MiB/s [2024-12-16T11:36:37.971Z] 4224.00 IOPS, 16.50 MiB/s [2024-12-16T11:36:39.348Z] 4224.00 IOPS, 16.50 MiB/s [2024-12-16T11:36:39.914Z] 4200.25 IOPS, 16.41 MiB/s [2024-12-16T11:36:41.290Z] 4206.60 IOPS, 16.43 MiB/s [2024-12-16T11:36:42.225Z] 4206.67 IOPS, 16.43 MiB/s [2024-12-16T11:36:43.160Z] 4207.86 IOPS, 16.44 MiB/s [2024-12-16T11:36:44.095Z] 4210.12 IOPS, 16.45 MiB/s [2024-12-16T11:36:45.031Z] 4220.78 IOPS, 16.49 MiB/s [2024-12-16T11:36:45.031Z] 4222.80 IOPS, 16.50 MiB/s 00:16:39.617 Latency(us) 00:16:39.617 [2024-12-16T11:36:45.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.617 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:39.617 Verification LBA range: start 0x0 length 0x2000 00:16:39.617 TLSTESTn1 : 10.02 4229.12 16.52 0.00 0.00 30210.89 5719.51 22758.87 00:16:39.617 [2024-12-16T11:36:45.031Z] =================================================================================================================== 00:16:39.617 [2024-12-16T11:36:45.031Z] Total : 4229.12 16.52 0.00 0.00 30210.89 5719.51 22758.87 00:16:39.617 { 00:16:39.618 "results": [ 00:16:39.618 { 00:16:39.618 "job": "TLSTESTn1", 00:16:39.618 "core_mask": "0x4", 00:16:39.618 "workload": "verify", 00:16:39.618 "status": "finished", 00:16:39.618 "verify_range": { 00:16:39.618 "start": 0, 00:16:39.618 "length": 8192 00:16:39.618 }, 00:16:39.618 "queue_depth": 128, 00:16:39.618 "io_size": 4096, 00:16:39.618 "runtime": 10.015314, 00:16:39.618 "iops": 4229.123520241103, 00:16:39.618 "mibps": 16.520013750941807, 00:16:39.618 "io_failed": 0, 00:16:39.618 "io_timeout": 0, 00:16:39.618 "avg_latency_us": 30210.88899183544, 00:16:39.618 "min_latency_us": 5719.505454545455, 00:16:39.618 "max_latency_us": 22758.865454545456 00:16:39.618 } 00:16:39.618 ], 00:16:39.618 "core_count": 1 00:16:39.618 } 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # 
killprocess 85762 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85762 ']' 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85762 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85762 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:39.618 killing process with pid 85762 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85762' 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85762 00:16:39.618 Received shutdown signal, test time was about 10.000000 seconds 00:16:39.618 00:16:39.618 Latency(us) 00:16:39.618 [2024-12-16T11:36:45.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.618 [2024-12-16T11:36:45.032Z] =================================================================================================================== 00:16:39.618 [2024-12-16T11:36:45.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.618 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85762 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 85719 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85719 ']' 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85719 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85719 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.876 killing process with pid 85719 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85719' 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85719 00:16:39.876 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85719 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85919 00:16:40.133 11:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85919 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85919 ']' 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.133 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.133 [2024-12-16 11:36:45.501312] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:40.133 [2024-12-16 11:36:45.501454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.391 [2024-12-16 11:36:45.652621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.391 [2024-12-16 11:36:45.721463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.391 [2024-12-16 11:36:45.721540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.391 [2024-12-16 11:36:45.721573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.391 [2024-12-16 11:36:45.721588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.391 [2024-12-16 11:36:45.721602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
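[Note] waitforlisten, called throughout this trace, is a helper from common/autotest_common.sh whose body is not shown in this log. A hypothetical approximation of what it does, inferred only from the messages above (the "local max_retries=100" assignment and the "Waiting for process..." echo); the polling details and the rpc_get_methods probe are assumptions, not the actual implementation:

# Hypothetical sketch: block until $pid is alive and its RPC socket answers.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process exited early
        # An RPC that succeeds only once the app is listening on the socket.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}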
00:16:40.391 [2024-12-16 11:36:45.722156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.0XhZFTuKQV 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0XhZFTuKQV 00:16:41.325 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.582 [2024-12-16 11:36:46.766517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.582 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.840 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:42.098 [2024-12-16 11:36:47.294648] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.098 [2024-12-16 11:36:47.294892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:42.098 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.355 malloc0 00:16:42.355 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:42.613 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:42.870 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=86030 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 86030 /var/tmp/bdevperf.sock 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86030 ']' 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
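[Note] The setup_nvmf_tgt trace above reduces to seven RPCs that build a TLS-protected NVMe/TCP target around the PSK file /tmp/tmp.0XhZFTuKQV. Collected from the trace into one sequence, with $rpc standing for the scripts/rpc.py path used in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o                      # TCP transport; -o disables the C2H success optimization
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                           # serial number, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                         # -k => secure_channel (TLS) listener
$rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB ram bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV        # register the PSK in the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0                  # this host may only connect with this PSK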
00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.128 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.386 [2024-12-16 11:36:48.575586] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:43.386 [2024-12-16 11:36:48.575700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86030 ] 00:16:43.386 [2024-12-16 11:36:48.725762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.386 [2024-12-16 11:36:48.795440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.320 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.320 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:44.320 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:44.579 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:44.837 [2024-12-16 11:36:50.175279] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.095 nvme0n1 00:16:45.095 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.095 Running I/O for 1 seconds... 
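[Note] The initiator side above is the mirror image: the same PSK file is registered inside the bdevperf process, and bdev_nvme_attach_controller connects with --psk, which triggers the "TLS support is considered experimental" notice. The sequence as it appears in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the same PSK inside the bdevperf app (note -s: its own RPC socket).
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV

# Attach to the TLS listener; --psk selects the keyring entry, -q sets the host NQN.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Kick off the configured bdevperf job against the new controller's bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests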
00:16:46.515 4037.00 IOPS, 15.77 MiB/s 00:16:46.515 Latency(us) 00:16:46.515 [2024-12-16T11:36:51.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.515 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:46.515 Verification LBA range: start 0x0 length 0x2000 00:16:46.515 nvme0n1 : 1.02 4101.52 16.02 0.00 0.00 30945.72 5004.57 23831.27 00:16:46.515 [2024-12-16T11:36:51.929Z] =================================================================================================================== 00:16:46.515 [2024-12-16T11:36:51.929Z] Total : 4101.52 16.02 0.00 0.00 30945.72 5004.57 23831.27 00:16:46.515 { 00:16:46.515 "results": [ 00:16:46.515 { 00:16:46.515 "job": "nvme0n1", 00:16:46.515 "core_mask": "0x2", 00:16:46.515 "workload": "verify", 00:16:46.515 "status": "finished", 00:16:46.515 "verify_range": { 00:16:46.515 "start": 0, 00:16:46.515 "length": 8192 00:16:46.515 }, 00:16:46.515 "queue_depth": 128, 00:16:46.515 "io_size": 4096, 00:16:46.515 "runtime": 1.015478, 00:16:46.515 "iops": 4101.516724143704, 00:16:46.515 "mibps": 16.021549703686343, 00:16:46.515 "io_failed": 0, 00:16:46.515 "io_timeout": 0, 00:16:46.515 "avg_latency_us": 30945.719415911823, 00:16:46.515 "min_latency_us": 5004.567272727273, 00:16:46.515 "max_latency_us": 23831.272727272728 00:16:46.515 } 00:16:46.515 ], 00:16:46.515 "core_count": 1 00:16:46.515 } 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86030 ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:46.515 killing process with pid 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86030' 00:16:46.515 Received shutdown signal, test time was about 1.000000 seconds 00:16:46.515 00:16:46.515 Latency(us) 00:16:46.515 [2024-12-16T11:36:51.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.515 [2024-12-16T11:36:51.929Z] =================================================================================================================== 00:16:46.515 [2024-12-16T11:36:51.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86030 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85919 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85919 ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85919 00:16:46.515 11:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85919 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.515 killing process with pid 85919 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85919' 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85919 00:16:46.515 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85919 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86107 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86107 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86107 ']' 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.774 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.775 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.775 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.775 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.775 [2024-12-16 11:36:52.038196] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:46.775 [2024-12-16 11:36:52.038309] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.775 [2024-12-16 11:36:52.180511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.034 [2024-12-16 11:36:52.239801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.034 [2024-12-16 11:36:52.239894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:47.034 [2024-12-16 11:36:52.239908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.034 [2024-12-16 11:36:52.239916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.034 [2024-12-16 11:36:52.239924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.034 [2024-12-16 11:36:52.240354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.034 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.034 [2024-12-16 11:36:52.414003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.034 malloc0 00:16:47.034 [2024-12-16 11:36:52.445571] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.034 [2024-12-16 11:36:52.445876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=86148 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 86148 /var/tmp/bdevperf.sock 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86148 ']' 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.292 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.292 [2024-12-16 11:36:52.536669] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:16:47.292 [2024-12-16 11:36:52.536774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86148 ] 00:16:47.292 [2024-12-16 11:36:52.685756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.551 [2024-12-16 11:36:52.740305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.485 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.485 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:48.485 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0XhZFTuKQV 00:16:48.485 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:48.743 [2024-12-16 11:36:54.131683] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.002 nvme0n1 00:16:49.002 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:49.002 Running I/O for 1 seconds... 00:16:50.195 4162.00 IOPS, 16.26 MiB/s 00:16:50.195 Latency(us) 00:16:50.195 [2024-12-16T11:36:55.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:50.195 Verification LBA range: start 0x0 length 0x2000 00:16:50.195 nvme0n1 : 1.02 4190.16 16.37 0.00 0.00 30125.37 7119.59 20375.74 00:16:50.195 [2024-12-16T11:36:55.609Z] =================================================================================================================== 00:16:50.195 [2024-12-16T11:36:55.609Z] Total : 4190.16 16.37 0.00 0.00 30125.37 7119.59 20375.74 00:16:50.195 { 00:16:50.195 "results": [ 00:16:50.195 { 00:16:50.195 "job": "nvme0n1", 00:16:50.195 "core_mask": "0x2", 00:16:50.195 "workload": "verify", 00:16:50.195 "status": "finished", 00:16:50.195 "verify_range": { 00:16:50.195 "start": 0, 00:16:50.195 "length": 8192 00:16:50.195 }, 00:16:50.195 "queue_depth": 128, 00:16:50.195 "io_size": 4096, 00:16:50.195 "runtime": 1.023827, 00:16:50.195 "iops": 4190.1610330651565, 00:16:50.195 "mibps": 16.367816535410768, 00:16:50.195 "io_failed": 0, 00:16:50.195 "io_timeout": 0, 00:16:50.195 "avg_latency_us": 30125.37433184997, 00:16:50.195 "min_latency_us": 7119.592727272728, 00:16:50.195 "max_latency_us": 20375.738181818182 00:16:50.195 } 00:16:50.195 ], 00:16:50.195 "core_count": 1 00:16:50.195 } 00:16:50.195 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:50.195 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:16:50.195 "subsystems": [ 00:16:50.195 { 00:16:50.195 "subsystem": "keyring", 00:16:50.195 "config": [ 00:16:50.195 { 00:16:50.195 "method": "keyring_file_add_key", 00:16:50.195 "params": { 00:16:50.195 "name": "key0", 00:16:50.195 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:50.195 } 00:16:50.195 } 00:16:50.195 ] 00:16:50.195 }, 00:16:50.195 { 00:16:50.195 "subsystem": "iobuf", 00:16:50.195 "config": [ 00:16:50.195 { 00:16:50.195 "method": "iobuf_set_options", 00:16:50.195 "params": { 00:16:50.195 "enable_numa": false, 00:16:50.195 "large_bufsize": 135168, 00:16:50.195 "large_pool_count": 1024, 00:16:50.195 "small_bufsize": 8192, 00:16:50.195 "small_pool_count": 8192 00:16:50.195 } 00:16:50.195 } 00:16:50.195 ] 00:16:50.195 }, 00:16:50.195 { 00:16:50.195 "subsystem": "sock", 00:16:50.195 "config": [ 00:16:50.195 { 00:16:50.195 "method": "sock_set_default_impl", 00:16:50.195 "params": { 00:16:50.195 "impl_name": "posix" 00:16:50.195 } 00:16:50.195 }, 00:16:50.195 { 00:16:50.195 "method": "sock_impl_set_options", 00:16:50.195 "params": { 00:16:50.195 "enable_ktls": false, 00:16:50.195 "enable_placement_id": 0, 00:16:50.195 "enable_quickack": false, 00:16:50.195 "enable_recv_pipe": true, 00:16:50.195 "enable_zerocopy_send_client": false, 00:16:50.195 "enable_zerocopy_send_server": true, 00:16:50.195 "impl_name": "ssl", 00:16:50.195 "recv_buf_size": 4096, 00:16:50.195 "send_buf_size": 4096, 00:16:50.195 "tls_version": 0, 00:16:50.195 "zerocopy_threshold": 0 00:16:50.195 } 00:16:50.195 }, 00:16:50.195 { 00:16:50.195 "method": "sock_impl_set_options", 00:16:50.195 "params": { 00:16:50.195 "enable_ktls": false, 00:16:50.195 "enable_placement_id": 0, 00:16:50.195 "enable_quickack": false, 00:16:50.195 "enable_recv_pipe": true, 00:16:50.195 "enable_zerocopy_send_client": false, 00:16:50.195 "enable_zerocopy_send_server": true, 00:16:50.195 "impl_name": "posix", 00:16:50.195 "recv_buf_size": 2097152, 00:16:50.196 "send_buf_size": 2097152, 00:16:50.196 "tls_version": 0, 00:16:50.196 "zerocopy_threshold": 0 00:16:50.196 } 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "vmd", 00:16:50.196 "config": [] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "accel", 00:16:50.196 "config": [ 00:16:50.196 { 00:16:50.196 "method": "accel_set_options", 00:16:50.196 "params": { 00:16:50.196 "buf_count": 2048, 00:16:50.196 "large_cache_size": 16, 00:16:50.196 "sequence_count": 2048, 00:16:50.196 "small_cache_size": 128, 00:16:50.196 "task_count": 2048 00:16:50.196 } 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "bdev", 00:16:50.196 "config": [ 00:16:50.196 { 00:16:50.196 "method": "bdev_set_options", 00:16:50.196 "params": { 00:16:50.196 "bdev_auto_examine": true, 00:16:50.196 "bdev_io_cache_size": 256, 00:16:50.196 "bdev_io_pool_size": 65535, 00:16:50.196 "iobuf_large_cache_size": 16, 00:16:50.196 "iobuf_small_cache_size": 128 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_raid_set_options", 00:16:50.196 "params": { 00:16:50.196 "process_max_bandwidth_mb_sec": 0, 00:16:50.196 "process_window_size_kb": 1024 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_iscsi_set_options", 00:16:50.196 "params": { 00:16:50.196 "timeout_sec": 30 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_nvme_set_options", 00:16:50.196 "params": { 00:16:50.196 "action_on_timeout": "none", 00:16:50.196 "allow_accel_sequence": false, 00:16:50.196 "arbitration_burst": 0, 00:16:50.196 
"bdev_retry_count": 3, 00:16:50.196 "ctrlr_loss_timeout_sec": 0, 00:16:50.196 "delay_cmd_submit": true, 00:16:50.196 "dhchap_dhgroups": [ 00:16:50.196 "null", 00:16:50.196 "ffdhe2048", 00:16:50.196 "ffdhe3072", 00:16:50.196 "ffdhe4096", 00:16:50.196 "ffdhe6144", 00:16:50.196 "ffdhe8192" 00:16:50.196 ], 00:16:50.196 "dhchap_digests": [ 00:16:50.196 "sha256", 00:16:50.196 "sha384", 00:16:50.196 "sha512" 00:16:50.196 ], 00:16:50.196 "disable_auto_failback": false, 00:16:50.196 "fast_io_fail_timeout_sec": 0, 00:16:50.196 "generate_uuids": false, 00:16:50.196 "high_priority_weight": 0, 00:16:50.196 "io_path_stat": false, 00:16:50.196 "io_queue_requests": 0, 00:16:50.196 "keep_alive_timeout_ms": 10000, 00:16:50.196 "low_priority_weight": 0, 00:16:50.196 "medium_priority_weight": 0, 00:16:50.196 "nvme_adminq_poll_period_us": 10000, 00:16:50.196 "nvme_error_stat": false, 00:16:50.196 "nvme_ioq_poll_period_us": 0, 00:16:50.196 "rdma_cm_event_timeout_ms": 0, 00:16:50.196 "rdma_max_cq_size": 0, 00:16:50.196 "rdma_srq_size": 0, 00:16:50.196 "rdma_umr_per_io": false, 00:16:50.196 "reconnect_delay_sec": 0, 00:16:50.196 "timeout_admin_us": 0, 00:16:50.196 "timeout_us": 0, 00:16:50.196 "transport_ack_timeout": 0, 00:16:50.196 "transport_retry_count": 4, 00:16:50.196 "transport_tos": 0 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_nvme_set_hotplug", 00:16:50.196 "params": { 00:16:50.196 "enable": false, 00:16:50.196 "period_us": 100000 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_malloc_create", 00:16:50.196 "params": { 00:16:50.196 "block_size": 4096, 00:16:50.196 "dif_is_head_of_md": false, 00:16:50.196 "dif_pi_format": 0, 00:16:50.196 "dif_type": 0, 00:16:50.196 "md_size": 0, 00:16:50.196 "name": "malloc0", 00:16:50.196 "num_blocks": 8192, 00:16:50.196 "optimal_io_boundary": 0, 00:16:50.196 "physical_block_size": 4096, 00:16:50.196 "uuid": "8de2715f-a695-449a-b53e-4c4eda887c64" 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "bdev_wait_for_examine" 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "nbd", 00:16:50.196 "config": [] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "scheduler", 00:16:50.196 "config": [ 00:16:50.196 { 00:16:50.196 "method": "framework_set_scheduler", 00:16:50.196 "params": { 00:16:50.196 "name": "static" 00:16:50.196 } 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "subsystem": "nvmf", 00:16:50.196 "config": [ 00:16:50.196 { 00:16:50.196 "method": "nvmf_set_config", 00:16:50.196 "params": { 00:16:50.196 "admin_cmd_passthru": { 00:16:50.196 "identify_ctrlr": false 00:16:50.196 }, 00:16:50.196 "dhchap_dhgroups": [ 00:16:50.196 "null", 00:16:50.196 "ffdhe2048", 00:16:50.196 "ffdhe3072", 00:16:50.196 "ffdhe4096", 00:16:50.196 "ffdhe6144", 00:16:50.196 "ffdhe8192" 00:16:50.196 ], 00:16:50.196 "dhchap_digests": [ 00:16:50.196 "sha256", 00:16:50.196 "sha384", 00:16:50.196 "sha512" 00:16:50.196 ], 00:16:50.196 "discovery_filter": "match_any" 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_set_max_subsystems", 00:16:50.196 "params": { 00:16:50.196 "max_subsystems": 1024 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_set_crdt", 00:16:50.196 "params": { 00:16:50.196 "crdt1": 0, 00:16:50.196 "crdt2": 0, 00:16:50.196 "crdt3": 0 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_create_transport", 00:16:50.196 "params": { 00:16:50.196 "abort_timeout_sec": 1, 
00:16:50.196 "ack_timeout": 0, 00:16:50.196 "buf_cache_size": 4294967295, 00:16:50.196 "c2h_success": false, 00:16:50.196 "data_wr_pool_size": 0, 00:16:50.196 "dif_insert_or_strip": false, 00:16:50.196 "in_capsule_data_size": 4096, 00:16:50.196 "io_unit_size": 131072, 00:16:50.196 "max_aq_depth": 128, 00:16:50.196 "max_io_qpairs_per_ctrlr": 127, 00:16:50.196 "max_io_size": 131072, 00:16:50.196 "max_queue_depth": 128, 00:16:50.196 "num_shared_buffers": 511, 00:16:50.196 "sock_priority": 0, 00:16:50.196 "trtype": "TCP", 00:16:50.196 "zcopy": false 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_create_subsystem", 00:16:50.196 "params": { 00:16:50.196 "allow_any_host": false, 00:16:50.196 "ana_reporting": false, 00:16:50.196 "max_cntlid": 65519, 00:16:50.196 "max_namespaces": 32, 00:16:50.196 "min_cntlid": 1, 00:16:50.196 "model_number": "SPDK bdev Controller", 00:16:50.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.196 "serial_number": "00000000000000000000" 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_subsystem_add_host", 00:16:50.196 "params": { 00:16:50.196 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.196 "psk": "key0" 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_subsystem_add_ns", 00:16:50.196 "params": { 00:16:50.196 "namespace": { 00:16:50.196 "bdev_name": "malloc0", 00:16:50.196 "nguid": "8DE2715FA695449AB53E4C4EDA887C64", 00:16:50.196 "no_auto_visible": false, 00:16:50.196 "nsid": 1, 00:16:50.196 "uuid": "8de2715f-a695-449a-b53e-4c4eda887c64" 00:16:50.196 }, 00:16:50.196 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:50.196 } 00:16:50.196 }, 00:16:50.196 { 00:16:50.196 "method": "nvmf_subsystem_add_listener", 00:16:50.196 "params": { 00:16:50.196 "listen_address": { 00:16:50.196 "adrfam": "IPv4", 00:16:50.196 "traddr": "10.0.0.3", 00:16:50.196 "trsvcid": "4420", 00:16:50.196 "trtype": "TCP" 00:16:50.196 }, 00:16:50.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.196 "secure_channel": false, 00:16:50.196 "sock_impl": "ssl" 00:16:50.196 } 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 } 00:16:50.196 ] 00:16:50.196 }' 00:16:50.196 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:50.763 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:50.763 "subsystems": [ 00:16:50.763 { 00:16:50.763 "subsystem": "keyring", 00:16:50.763 "config": [ 00:16:50.763 { 00:16:50.763 "method": "keyring_file_add_key", 00:16:50.763 "params": { 00:16:50.763 "name": "key0", 00:16:50.763 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:50.763 } 00:16:50.763 } 00:16:50.763 ] 00:16:50.763 }, 00:16:50.763 { 00:16:50.763 "subsystem": "iobuf", 00:16:50.763 "config": [ 00:16:50.763 { 00:16:50.763 "method": "iobuf_set_options", 00:16:50.763 "params": { 00:16:50.763 "enable_numa": false, 00:16:50.763 "large_bufsize": 135168, 00:16:50.763 "large_pool_count": 1024, 00:16:50.763 "small_bufsize": 8192, 00:16:50.763 "small_pool_count": 8192 00:16:50.763 } 00:16:50.763 } 00:16:50.763 ] 00:16:50.763 }, 00:16:50.763 { 00:16:50.763 "subsystem": "sock", 00:16:50.763 "config": [ 00:16:50.763 { 00:16:50.763 "method": "sock_set_default_impl", 00:16:50.763 "params": { 00:16:50.763 "impl_name": "posix" 00:16:50.763 } 00:16:50.763 }, 00:16:50.763 { 00:16:50.763 "method": "sock_impl_set_options", 00:16:50.764 "params": { 00:16:50.764 "enable_ktls": false, 00:16:50.764 
"enable_placement_id": 0, 00:16:50.764 "enable_quickack": false, 00:16:50.764 "enable_recv_pipe": true, 00:16:50.764 "enable_zerocopy_send_client": false, 00:16:50.764 "enable_zerocopy_send_server": true, 00:16:50.764 "impl_name": "ssl", 00:16:50.764 "recv_buf_size": 4096, 00:16:50.764 "send_buf_size": 4096, 00:16:50.764 "tls_version": 0, 00:16:50.764 "zerocopy_threshold": 0 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "sock_impl_set_options", 00:16:50.764 "params": { 00:16:50.764 "enable_ktls": false, 00:16:50.764 "enable_placement_id": 0, 00:16:50.764 "enable_quickack": false, 00:16:50.764 "enable_recv_pipe": true, 00:16:50.764 "enable_zerocopy_send_client": false, 00:16:50.764 "enable_zerocopy_send_server": true, 00:16:50.764 "impl_name": "posix", 00:16:50.764 "recv_buf_size": 2097152, 00:16:50.764 "send_buf_size": 2097152, 00:16:50.764 "tls_version": 0, 00:16:50.764 "zerocopy_threshold": 0 00:16:50.764 } 00:16:50.764 } 00:16:50.764 ] 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "subsystem": "vmd", 00:16:50.764 "config": [] 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "subsystem": "accel", 00:16:50.764 "config": [ 00:16:50.764 { 00:16:50.764 "method": "accel_set_options", 00:16:50.764 "params": { 00:16:50.764 "buf_count": 2048, 00:16:50.764 "large_cache_size": 16, 00:16:50.764 "sequence_count": 2048, 00:16:50.764 "small_cache_size": 128, 00:16:50.764 "task_count": 2048 00:16:50.764 } 00:16:50.764 } 00:16:50.764 ] 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "subsystem": "bdev", 00:16:50.764 "config": [ 00:16:50.764 { 00:16:50.764 "method": "bdev_set_options", 00:16:50.764 "params": { 00:16:50.764 "bdev_auto_examine": true, 00:16:50.764 "bdev_io_cache_size": 256, 00:16:50.764 "bdev_io_pool_size": 65535, 00:16:50.764 "iobuf_large_cache_size": 16, 00:16:50.764 "iobuf_small_cache_size": 128 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_raid_set_options", 00:16:50.764 "params": { 00:16:50.764 "process_max_bandwidth_mb_sec": 0, 00:16:50.764 "process_window_size_kb": 1024 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_iscsi_set_options", 00:16:50.764 "params": { 00:16:50.764 "timeout_sec": 30 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_nvme_set_options", 00:16:50.764 "params": { 00:16:50.764 "action_on_timeout": "none", 00:16:50.764 "allow_accel_sequence": false, 00:16:50.764 "arbitration_burst": 0, 00:16:50.764 "bdev_retry_count": 3, 00:16:50.764 "ctrlr_loss_timeout_sec": 0, 00:16:50.764 "delay_cmd_submit": true, 00:16:50.764 "dhchap_dhgroups": [ 00:16:50.764 "null", 00:16:50.764 "ffdhe2048", 00:16:50.764 "ffdhe3072", 00:16:50.764 "ffdhe4096", 00:16:50.764 "ffdhe6144", 00:16:50.764 "ffdhe8192" 00:16:50.764 ], 00:16:50.764 "dhchap_digests": [ 00:16:50.764 "sha256", 00:16:50.764 "sha384", 00:16:50.764 "sha512" 00:16:50.764 ], 00:16:50.764 "disable_auto_failback": false, 00:16:50.764 "fast_io_fail_timeout_sec": 0, 00:16:50.764 "generate_uuids": false, 00:16:50.764 "high_priority_weight": 0, 00:16:50.764 "io_path_stat": false, 00:16:50.764 "io_queue_requests": 512, 00:16:50.764 "keep_alive_timeout_ms": 10000, 00:16:50.764 "low_priority_weight": 0, 00:16:50.764 "medium_priority_weight": 0, 00:16:50.764 "nvme_adminq_poll_period_us": 10000, 00:16:50.764 "nvme_error_stat": false, 00:16:50.764 "nvme_ioq_poll_period_us": 0, 00:16:50.764 "rdma_cm_event_timeout_ms": 0, 00:16:50.764 "rdma_max_cq_size": 0, 00:16:50.764 "rdma_srq_size": 0, 00:16:50.764 "rdma_umr_per_io": false, 00:16:50.764 
"reconnect_delay_sec": 0, 00:16:50.764 "timeout_admin_us": 0, 00:16:50.764 "timeout_us": 0, 00:16:50.764 "transport_ack_timeout": 0, 00:16:50.764 "transport_retry_count": 4, 00:16:50.764 "transport_tos": 0 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_nvme_attach_controller", 00:16:50.764 "params": { 00:16:50.764 "adrfam": "IPv4", 00:16:50.764 "ctrlr_loss_timeout_sec": 0, 00:16:50.764 "ddgst": false, 00:16:50.764 "fast_io_fail_timeout_sec": 0, 00:16:50.764 "hdgst": false, 00:16:50.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.764 "multipath": "multipath", 00:16:50.764 "name": "nvme0", 00:16:50.764 "prchk_guard": false, 00:16:50.764 "prchk_reftag": false, 00:16:50.764 "psk": "key0", 00:16:50.764 "reconnect_delay_sec": 0, 00:16:50.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.764 "traddr": "10.0.0.3", 00:16:50.764 "trsvcid": "4420", 00:16:50.764 "trtype": "TCP" 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_nvme_set_hotplug", 00:16:50.764 "params": { 00:16:50.764 "enable": false, 00:16:50.764 "period_us": 100000 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_enable_histogram", 00:16:50.764 "params": { 00:16:50.764 "enable": true, 00:16:50.764 "name": "nvme0n1" 00:16:50.764 } 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "method": "bdev_wait_for_examine" 00:16:50.764 } 00:16:50.764 ] 00:16:50.764 }, 00:16:50.764 { 00:16:50.764 "subsystem": "nbd", 00:16:50.764 "config": [] 00:16:50.764 } 00:16:50.764 ] 00:16:50.764 }' 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 86148 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86148 ']' 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86148 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86148 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:50.764 killing process with pid 86148 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86148' 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86148 00:16:50.764 Received shutdown signal, test time was about 1.000000 seconds 00:16:50.764 00:16:50.764 Latency(us) 00:16:50.764 [2024-12-16T11:36:56.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.764 [2024-12-16T11:36:56.178Z] =================================================================================================================== 00:16:50.764 [2024-12-16T11:36:56.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.764 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86148 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 86107 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86107 ']' 00:16:50.764 11:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86107 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86107 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.764 killing process with pid 86107 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86107' 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86107 00:16:50.764 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86107 00:16:51.024 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:51.024 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.024 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:51.024 "subsystems": [ 00:16:51.024 { 00:16:51.024 "subsystem": "keyring", 00:16:51.024 "config": [ 00:16:51.024 { 00:16:51.024 "method": "keyring_file_add_key", 00:16:51.024 "params": { 00:16:51.024 "name": "key0", 00:16:51.024 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:51.024 } 00:16:51.024 } 00:16:51.024 ] 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "subsystem": "iobuf", 00:16:51.024 "config": [ 00:16:51.024 { 00:16:51.024 "method": "iobuf_set_options", 00:16:51.024 "params": { 00:16:51.024 "enable_numa": false, 00:16:51.024 "large_bufsize": 135168, 00:16:51.024 "large_pool_count": 1024, 00:16:51.024 "small_bufsize": 8192, 00:16:51.024 "small_pool_count": 8192 00:16:51.024 } 00:16:51.024 } 00:16:51.024 ] 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "subsystem": "sock", 00:16:51.024 "config": [ 00:16:51.024 { 00:16:51.024 "method": "sock_set_default_impl", 00:16:51.024 "params": { 00:16:51.024 "impl_name": "posix" 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "sock_impl_set_options", 00:16:51.024 "params": { 00:16:51.024 "enable_ktls": false, 00:16:51.024 "enable_placement_id": 0, 00:16:51.024 "enable_quickack": false, 00:16:51.024 "enable_recv_pipe": true, 00:16:51.024 "enable_zerocopy_send_client": false, 00:16:51.024 "enable_zerocopy_send_server": true, 00:16:51.024 "impl_name": "ssl", 00:16:51.024 "recv_buf_size": 4096, 00:16:51.024 "send_buf_size": 4096, 00:16:51.024 "tls_version": 0, 00:16:51.024 "zerocopy_threshold": 0 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "sock_impl_set_options", 00:16:51.024 "params": { 00:16:51.024 "enable_ktls": false, 00:16:51.024 "enable_placement_id": 0, 00:16:51.024 "enable_quickack": false, 00:16:51.024 "enable_recv_pipe": true, 00:16:51.024 "enable_zerocopy_send_client": false, 00:16:51.024 "enable_zerocopy_send_server": true, 00:16:51.024 "impl_name": "posix", 00:16:51.024 "recv_buf_size": 2097152, 00:16:51.024 "send_buf_size": 2097152, 00:16:51.024 "tls_version": 0, 00:16:51.024 "zerocopy_threshold": 0 00:16:51.024 } 00:16:51.024 } 00:16:51.024 ] 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "subsystem": "vmd", 00:16:51.024 "config": [] 
00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "subsystem": "accel", 00:16:51.024 "config": [ 00:16:51.024 { 00:16:51.024 "method": "accel_set_options", 00:16:51.024 "params": { 00:16:51.024 "buf_count": 2048, 00:16:51.024 "large_cache_size": 16, 00:16:51.024 "sequence_count": 2048, 00:16:51.024 "small_cache_size": 128, 00:16:51.024 "task_count": 2048 00:16:51.024 } 00:16:51.024 } 00:16:51.024 ] 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "subsystem": "bdev", 00:16:51.024 "config": [ 00:16:51.024 { 00:16:51.024 "method": "bdev_set_options", 00:16:51.024 "params": { 00:16:51.024 "bdev_auto_examine": true, 00:16:51.024 "bdev_io_cache_size": 256, 00:16:51.024 "bdev_io_pool_size": 65535, 00:16:51.024 "iobuf_large_cache_size": 16, 00:16:51.024 "iobuf_small_cache_size": 128 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "bdev_raid_set_options", 00:16:51.024 "params": { 00:16:51.024 "process_max_bandwidth_mb_sec": 0, 00:16:51.024 "process_window_size_kb": 1024 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "bdev_iscsi_set_options", 00:16:51.024 "params": { 00:16:51.024 "timeout_sec": 30 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "bdev_nvme_set_options", 00:16:51.024 "params": { 00:16:51.024 "action_on_timeout": "none", 00:16:51.024 "allow_accel_sequence": false, 00:16:51.024 "arbitration_burst": 0, 00:16:51.024 "bdev_retry_count": 3, 00:16:51.024 "ctrlr_loss_timeout_sec": 0, 00:16:51.024 "delay_cmd_submit": true, 00:16:51.024 "dhchap_dhgroups": [ 00:16:51.024 "null", 00:16:51.024 "ffdhe2048", 00:16:51.024 "ffdhe3072", 00:16:51.024 "ffdhe4096", 00:16:51.024 "ffdhe6144", 00:16:51.024 "ffdhe8192" 00:16:51.024 ], 00:16:51.024 "dhchap_digests": [ 00:16:51.024 "sha256", 00:16:51.024 "sha384", 00:16:51.024 "sha512" 00:16:51.024 ], 00:16:51.024 "disable_auto_failback": false, 00:16:51.024 "fast_io_fail_timeout_sec": 0, 00:16:51.024 "generate_uuids": false, 00:16:51.024 "high_priority_weight": 0, 00:16:51.024 "io_path_stat": false, 00:16:51.024 "io_queue_requests": 0, 00:16:51.024 "keep_alive_timeout_ms": 10000, 00:16:51.024 "low_priority_weight": 0, 00:16:51.024 "medium_priority_weight": 0, 00:16:51.024 "nvme_adminq_poll_period_us": 10000, 00:16:51.024 "nvme_error_stat": false, 00:16:51.024 "nvme_ioq_poll_period_us": 0, 00:16:51.024 "rdma_cm_event_timeout_ms": 0, 00:16:51.024 "rdma_max_cq_size": 0, 00:16:51.024 "rdma_srq_size": 0, 00:16:51.024 "rdma_umr_per_io": false, 00:16:51.024 "reconnect_delay_sec": 0, 00:16:51.024 "timeout_admin_us": 0, 00:16:51.024 "timeout_us": 0, 00:16:51.024 "transport_ack_timeout": 0, 00:16:51.024 "transport_retry_count": 4, 00:16:51.024 "transport_tos": 0 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "bdev_nvme_set_hotplug", 00:16:51.024 "params": { 00:16:51.024 "enable": false, 00:16:51.024 "period_us": 100000 00:16:51.024 } 00:16:51.024 }, 00:16:51.024 { 00:16:51.024 "method": "bdev_malloc_create", 00:16:51.024 "params": { 00:16:51.024 "block_size": 4096, 00:16:51.024 "dif_is_head_of_md": false, 00:16:51.024 "dif_pi_format": 0, 00:16:51.024 "dif_type": 0, 00:16:51.024 "md_size": 0, 00:16:51.024 "name": "malloc0", 00:16:51.024 "num_blocks": 8192, 00:16:51.024 "optimal_io_boundary": 0, 00:16:51.025 "physical_block_size": 4096, 00:16:51.025 "uuid": "8de2715f-a695-449a-b53e-4c4eda887c64" 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "bdev_wait_for_examine" 00:16:51.025 } 00:16:51.025 ] 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "subsystem": "nbd", 00:16:51.025 
"config": [] 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "subsystem": "scheduler", 00:16:51.025 "config": [ 00:16:51.025 { 00:16:51.025 "method": "framework_set_scheduler", 00:16:51.025 "params": { 00:16:51.025 "name": "static" 00:16:51.025 } 00:16:51.025 } 00:16:51.025 ] 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "subsystem": "nvmf", 00:16:51.025 "config": [ 00:16:51.025 { 00:16:51.025 "method": "nvmf_set_config", 00:16:51.025 "params": { 00:16:51.025 "admin_cmd_passthru": { 00:16:51.025 "identify_ctrlr": false 00:16:51.025 }, 00:16:51.025 "dhchap_dhgroups": [ 00:16:51.025 "null", 00:16:51.025 "ffdhe2048", 00:16:51.025 "ffdhe3072", 00:16:51.025 "ffdhe4096", 00:16:51.025 "ffdhe6144", 00:16:51.025 "ffdhe8192" 00:16:51.025 ], 00:16:51.025 "dhchap_digests": [ 00:16:51.025 "sha256", 00:16:51.025 "sha384", 00:16:51.025 "sha512" 00:16:51.025 ], 00:16:51.025 "discovery_filter": "match_any" 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_set_max_subsystems", 00:16:51.025 "params": { 00:16:51.025 "max_subsystems": 1024 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_set_crdt", 00:16:51.025 "params": { 00:16:51.025 "crdt1": 0, 00:16:51.025 "crdt2": 0, 00:16:51.025 "crdt3": 0 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_create_transport", 00:16:51.025 "params": { 00:16:51.025 "abort_timeout_sec": 1, 00:16:51.025 "ack_timeout": 0, 00:16:51.025 "buf_cache_size": 4294967295, 00:16:51.025 "c2h_success": false, 00:16:51.025 "data_wr_pool_size": 0, 00:16:51.025 "dif_insert_or_strip": false, 00:16:51.025 "in_capsule_data_size": 4096, 00:16:51.025 "io_unit_size": 131072, 00:16:51.025 "max_aq_depth": 128, 00:16:51.025 "max_io_qpairs_per_ctrlr": 127, 00:16:51.025 "max_io_size": 131072, 00:16:51.025 "max_queue_depth": 128, 00:16:51.025 "num_shared_buffers": 511, 00:16:51.025 "sock_priority": 0, 00:16:51.025 "trtype": "TCP", 00:16:51.025 "zcopy": false 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_create_subsystem", 00:16:51.025 "params": { 00:16:51.025 "allow_any_host": false, 00:16:51.025 "ana_reporting": false, 00:16:51.025 "max_cntlid": 65519, 00:16:51.025 "max_namespaces": 32, 00:16:51.025 "min_cntlid": 1, 00:16:51.025 "model_number": "SPDK bdev Controller", 00:16:51.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.025 "serial_number": "00000000000000000000" 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_subsystem_add_host", 00:16:51.025 "params": { 00:16:51.025 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.025 "psk": "key0" 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_subsystem_add_ns", 00:16:51.025 "params": { 00:16:51.025 "namespace": { 00:16:51.025 "bdev_name": "malloc0", 00:16:51.025 "nguid": "8DE2715FA695449AB53E4C4EDA887C64", 00:16:51.025 "no_auto_visible": false, 00:16:51.025 "nsid": 1, 00:16:51.025 "uuid": "8de2715f-a695-449a-b53e-4c4eda887c64" 00:16:51.025 }, 00:16:51.025 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:51.025 } 00:16:51.025 }, 00:16:51.025 { 00:16:51.025 "method": "nvmf_subsystem_add_listener", 00:16:51.025 "params": { 00:16:51.025 "listen_address": { 00:16:51.025 "adrfam": "IPv4", 00:16:51.025 "traddr": "10.0.0.3", 00:16:51.025 "trsvcid": "4420", 00:16:51.025 "trtype": "TCP" 00:16:51.025 }, 00:16:51.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.025 "secure_channel": false, 00:16:51.025 "sock_impl": "ssl" 00:16:51.025 } 00:16:51.025 } 00:16:51.025 ] 00:16:51.025 } 
00:16:51.025 ] 00:16:51.025 }' 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86236 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86236 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86236 ']' 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.025 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.025 [2024-12-16 11:36:56.399252] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:51.025 [2024-12-16 11:36:56.399348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.284 [2024-12-16 11:36:56.543411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.284 [2024-12-16 11:36:56.599180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.284 [2024-12-16 11:36:56.599234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.284 [2024-12-16 11:36:56.599262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.284 [2024-12-16 11:36:56.599271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.284 [2024-12-16 11:36:56.599279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:51.284 [2024-12-16 11:36:56.599780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.543 [2024-12-16 11:36:56.843660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.543 [2024-12-16 11:36:56.875629] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.543 [2024-12-16 11:36:56.875902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=86280 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 86280 /var/tmp/bdevperf.sock 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86280 ']' 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.110 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:52.110 "subsystems": [ 00:16:52.110 { 00:16:52.110 "subsystem": "keyring", 00:16:52.110 "config": [ 00:16:52.110 { 00:16:52.110 "method": "keyring_file_add_key", 00:16:52.110 "params": { 00:16:52.110 "name": "key0", 00:16:52.110 "path": "/tmp/tmp.0XhZFTuKQV" 00:16:52.110 } 00:16:52.110 } 00:16:52.110 ] 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "subsystem": "iobuf", 00:16:52.110 "config": [ 00:16:52.110 { 00:16:52.110 "method": "iobuf_set_options", 00:16:52.110 "params": { 00:16:52.110 "enable_numa": false, 00:16:52.110 "large_bufsize": 135168, 00:16:52.110 "large_pool_count": 1024, 00:16:52.110 "small_bufsize": 8192, 00:16:52.110 "small_pool_count": 8192 00:16:52.110 } 00:16:52.110 } 00:16:52.110 ] 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "subsystem": "sock", 00:16:52.110 "config": [ 00:16:52.110 { 00:16:52.110 "method": "sock_set_default_impl", 00:16:52.110 "params": { 00:16:52.110 "impl_name": "posix" 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "method": "sock_impl_set_options", 00:16:52.110 "params": { 00:16:52.110 "enable_ktls": false, 00:16:52.110 "enable_placement_id": 0, 00:16:52.110 "enable_quickack": false, 00:16:52.110 "enable_recv_pipe": true, 00:16:52.110 "enable_zerocopy_send_client": false, 00:16:52.110 "enable_zerocopy_send_server": true, 00:16:52.110 "impl_name": "ssl", 00:16:52.110 "recv_buf_size": 4096, 00:16:52.110 "send_buf_size": 4096, 00:16:52.110 "tls_version": 0, 00:16:52.110 "zerocopy_threshold": 0 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "method": "sock_impl_set_options", 00:16:52.110 "params": { 00:16:52.110 "enable_ktls": false, 00:16:52.110 "enable_placement_id": 0, 00:16:52.110 "enable_quickack": false, 
00:16:52.110 "enable_recv_pipe": true, 00:16:52.110 "enable_zerocopy_send_client": false, 00:16:52.110 "enable_zerocopy_send_server": true, 00:16:52.110 "impl_name": "posix", 00:16:52.110 "recv_buf_size": 2097152, 00:16:52.110 "send_buf_size": 2097152, 00:16:52.110 "tls_version": 0, 00:16:52.110 "zerocopy_threshold": 0 00:16:52.110 } 00:16:52.110 } 00:16:52.110 ] 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "subsystem": "vmd", 00:16:52.110 "config": [] 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "subsystem": "accel", 00:16:52.110 "config": [ 00:16:52.110 { 00:16:52.110 "method": "accel_set_options", 00:16:52.110 "params": { 00:16:52.110 "buf_count": 2048, 00:16:52.110 "large_cache_size": 16, 00:16:52.110 "sequence_count": 2048, 00:16:52.110 "small_cache_size": 128, 00:16:52.110 "task_count": 2048 00:16:52.110 } 00:16:52.110 } 00:16:52.110 ] 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "subsystem": "bdev", 00:16:52.110 "config": [ 00:16:52.110 { 00:16:52.110 "method": "bdev_set_options", 00:16:52.110 "params": { 00:16:52.110 "bdev_auto_examine": true, 00:16:52.110 "bdev_io_cache_size": 256, 00:16:52.110 "bdev_io_pool_size": 65535, 00:16:52.110 "iobuf_large_cache_size": 16, 00:16:52.110 "iobuf_small_cache_size": 128 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "method": "bdev_raid_set_options", 00:16:52.110 "params": { 00:16:52.110 "process_max_bandwidth_mb_sec": 0, 00:16:52.110 "process_window_size_kb": 1024 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "method": "bdev_iscsi_set_options", 00:16:52.110 "params": { 00:16:52.110 "timeout_sec": 30 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.110 "method": "bdev_nvme_set_options", 00:16:52.110 "params": { 00:16:52.110 "action_on_timeout": "none", 00:16:52.110 "allow_accel_sequence": false, 00:16:52.110 "arbitration_burst": 0, 00:16:52.110 "bdev_retry_count": 3, 00:16:52.110 "ctrlr_loss_timeout_sec": 0, 00:16:52.110 "delay_cmd_submit": true, 00:16:52.110 "dhchap_dhgroups": [ 00:16:52.110 "null", 00:16:52.110 "ffdhe2048", 00:16:52.110 "ffdhe3072", 00:16:52.110 "ffdhe4096", 00:16:52.110 "ffdhe6144", 00:16:52.110 "ffdhe8192" 00:16:52.110 ], 00:16:52.110 "dhchap_digests": [ 00:16:52.110 "sha256", 00:16:52.110 "sha384", 00:16:52.110 "sha512" 00:16:52.110 ], 00:16:52.110 "disable_auto_failback": false, 00:16:52.110 "fast_io_fail_timeout_sec": 0, 00:16:52.110 "generate_uuids": false, 00:16:52.110 "high_priority_weight": 0, 00:16:52.110 "io_path_stat": false, 00:16:52.110 "io_queue_requests": 512, 00:16:52.110 "keep_alive_timeout_ms": 10000, 00:16:52.110 "low_priority_weight": 0, 00:16:52.110 "medium_priority_weight": 0, 00:16:52.110 "nvme_adminq_poll_period_us": 10000, 00:16:52.110 "nvme_error_stat": false, 00:16:52.110 "nvme_ioq_poll_period_us": 0, 00:16:52.110 "rdma_cm_event_timeout_ms": 0, 00:16:52.110 "rdma_max_cq_size": 0, 00:16:52.110 "rdma_srq_size": 0, 00:16:52.110 "rdma_umr_per_io": false, 00:16:52.110 "reconnect_delay_sec": 0, 00:16:52.110 "timeout_admin_us": 0, 00:16:52.110 "timeout_us": 0, 00:16:52.110 "transport_ack_timeout": 0, 00:16:52.110 "transport_retry_count": 4, 00:16:52.110 "transport_tos": 0 00:16:52.110 } 00:16:52.110 }, 00:16:52.110 { 00:16:52.111 "method": "bdev_nvme_attach_controller", 00:16:52.111 "params": { 00:16:52.111 "adrfam": "IPv4", 00:16:52.111 "ctrlr_loss_timeout_sec": 0, 00:16:52.111 "ddgst": false, 00:16:52.111 "fast_io_fail_timeout_sec": 0, 00:16:52.111 "hdgst": false, 00:16:52.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.111 "multipath": "multipath", 00:16:52.111 "name": 
"nvme0", 00:16:52.111 "prchk_guard": false, 00:16:52.111 "prchk_reftag": false, 00:16:52.111 "psk": "key0", 00:16:52.111 "reconnect_delay_sec": 0, 00:16:52.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.111 "traddr": "10.0.0.3", 00:16:52.111 "trsvcid": "4420", 00:16:52.111 "trtype": "TCP" 00:16:52.111 } 00:16:52.111 }, 00:16:52.111 { 00:16:52.111 "method": "bdev_nvme_set_hotplug", 00:16:52.111 "params": { 00:16:52.111 "enable": false, 00:16:52.111 "period_us": 100000 00:16:52.111 } 00:16:52.111 }, 00:16:52.111 { 00:16:52.111 "method": "bdev_enable_histogram", 00:16:52.111 "params": { 00:16:52.111 "enable": true, 00:16:52.111 "name": "nvme0n1" 00:16:52.111 } 00:16:52.111 }, 00:16:52.111 { 00:16:52.111 "method": "bdev_wait_for_examine" 00:16:52.111 } 00:16:52.111 ] 00:16:52.111 }, 00:16:52.111 { 00:16:52.111 "subsystem": "nbd", 00:16:52.111 "config": [] 00:16:52.111 } 00:16:52.111 ] 00:16:52.111 }' 00:16:52.111 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:52.111 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.111 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.111 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.111 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.111 [2024-12-16 11:36:57.473962] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:52.111 [2024-12-16 11:36:57.474662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86280 ] 00:16:52.388 [2024-12-16 11:36:57.619175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.388 [2024-12-16 11:36:57.681073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.646 [2024-12-16 11:36:57.860072] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.214 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.214 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:53.214 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:53.214 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:53.472 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.472 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.731 Running I/O for 1 seconds... 
00:16:54.666 4096.00 IOPS, 16.00 MiB/s 00:16:54.666 Latency(us) 00:16:54.666 [2024-12-16T11:37:00.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.666 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:54.666 Verification LBA range: start 0x0 length 0x2000 00:16:54.666 nvme0n1 : 1.02 4141.92 16.18 0.00 0.00 30579.73 7238.75 19660.80 00:16:54.666 [2024-12-16T11:37:00.080Z] =================================================================================================================== 00:16:54.666 [2024-12-16T11:37:00.080Z] Total : 4141.92 16.18 0.00 0.00 30579.73 7238.75 19660.80 00:16:54.666 { 00:16:54.666 "results": [ 00:16:54.666 { 00:16:54.666 "job": "nvme0n1", 00:16:54.666 "core_mask": "0x2", 00:16:54.666 "workload": "verify", 00:16:54.666 "status": "finished", 00:16:54.666 "verify_range": { 00:16:54.666 "start": 0, 00:16:54.666 "length": 8192 00:16:54.666 }, 00:16:54.666 "queue_depth": 128, 00:16:54.666 "io_size": 4096, 00:16:54.666 "runtime": 1.019817, 00:16:54.666 "iops": 4141.919579689297, 00:16:54.666 "mibps": 16.179373358161317, 00:16:54.666 "io_failed": 0, 00:16:54.666 "io_timeout": 0, 00:16:54.666 "avg_latency_us": 30579.73245179063, 00:16:54.666 "min_latency_us": 7238.749090909091, 00:16:54.666 "max_latency_us": 19660.8 00:16:54.666 } 00:16:54.666 ], 00:16:54.666 "core_count": 1 00:16:54.666 } 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:54.666 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:54.666 nvmf_trace.0 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 86280 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86280 ']' 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86280 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86280 00:16:54.925 11:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.925 killing process with pid 86280 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86280' 00:16:54.925 Received shutdown signal, test time was about 1.000000 seconds 00:16:54.925 00:16:54.925 Latency(us) 00:16:54.925 [2024-12-16T11:37:00.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.925 [2024-12-16T11:37:00.339Z] =================================================================================================================== 00:16:54.925 [2024-12-16T11:37:00.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86280 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86280 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:54.925 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.184 rmmod nvme_tcp 00:16:55.184 rmmod nvme_fabrics 00:16:55.184 rmmod nvme_keyring 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 86236 ']' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 86236 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86236 ']' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86236 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86236 00:16:55.184 killing process with pid 86236 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86236' 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86236 00:16:55.184 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 86236 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.443 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.71z0YlyoAQ /tmp/tmp.4sGuISk101 /tmp/tmp.0XhZFTuKQV 00:16:55.702 00:16:55.702 real 1m28.395s 00:16:55.702 user 2m25.693s 00:16:55.702 sys 0m27.800s 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.702 ************************************ 00:16:55.702 END TEST nvmf_tls 00:16:55.702 
************************************ 00:16:55.702 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.702 ************************************ 00:16:55.702 START TEST nvmf_fips 00:16:55.702 ************************************ 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.702 * Looking for test storage... 00:16:55.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:16:55.702 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.962 --rc genhtml_branch_coverage=1 00:16:55.962 --rc genhtml_function_coverage=1 00:16:55.962 --rc genhtml_legend=1 00:16:55.962 --rc geninfo_all_blocks=1 00:16:55.962 --rc geninfo_unexecuted_blocks=1 00:16:55.962 00:16:55.962 ' 00:16:55.962 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.962 --rc genhtml_branch_coverage=1 00:16:55.962 --rc genhtml_function_coverage=1 00:16:55.963 --rc genhtml_legend=1 00:16:55.963 --rc geninfo_all_blocks=1 00:16:55.963 --rc geninfo_unexecuted_blocks=1 00:16:55.963 00:16:55.963 ' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:55.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.963 --rc genhtml_branch_coverage=1 00:16:55.963 --rc genhtml_function_coverage=1 00:16:55.963 --rc genhtml_legend=1 00:16:55.963 --rc geninfo_all_blocks=1 00:16:55.963 --rc geninfo_unexecuted_blocks=1 00:16:55.963 00:16:55.963 ' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:55.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.963 --rc genhtml_branch_coverage=1 00:16:55.963 --rc genhtml_function_coverage=1 00:16:55.963 --rc genhtml_legend=1 00:16:55.963 --rc geninfo_all_blocks=1 00:16:55.963 --rc geninfo_unexecuted_blocks=1 00:16:55.963 00:16:55.963 ' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:55.963 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.964 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:56.223 Error setting digest 00:16:56.223 400266772A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:56.223 400266772A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.223 
11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.223 Cannot find device "nvmf_init_br" 00:16:56.223 11:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.223 Cannot find device "nvmf_init_br2" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.223 Cannot find device "nvmf_tgt_br" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.223 Cannot find device "nvmf_tgt_br2" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.223 Cannot find device "nvmf_init_br" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.223 Cannot find device "nvmf_init_br2" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.223 Cannot find device "nvmf_tgt_br" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.223 Cannot find device "nvmf_tgt_br2" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.223 Cannot find device "nvmf_br" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.223 Cannot find device "nvmf_init_if" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.223 Cannot find device "nvmf_init_if2" 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.223 11:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.223 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:56.482 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:56.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:16:56.483 00:16:56.483 --- 10.0.0.3 ping statistics --- 00:16:56.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.483 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:56.483 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.483 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:16:56.483 00:16:56.483 --- 10.0.0.4 ping statistics --- 00:16:56.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.483 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:56.483 00:16:56.483 --- 10.0.0.1 ping statistics --- 00:16:56.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.483 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:56.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:56.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:56.483 00:16:56.483 --- 10.0.0.2 ping statistics --- 00:16:56.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.483 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86622 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86622 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86622 ']' 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.483 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.741 [2024-12-16 11:37:01.936687] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:16:56.741 [2024-12-16 11:37:01.937029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.741 [2024-12-16 11:37:02.091512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.000 [2024-12-16 11:37:02.160264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.000 [2024-12-16 11:37:02.160322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.000 [2024-12-16 11:37:02.160337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.000 [2024-12-16 11:37:02.160348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.000 [2024-12-16 11:37:02.160358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.000 [2024-12-16 11:37:02.160814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.934 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.934 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:57.934 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.934 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.934 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.w9z 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.w9z 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.w9z 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.w9z 00:16:57.934 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.934 [2024-12-16 11:37:03.267609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.934 [2024-12-16 11:37:03.283585] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:57.934 [2024-12-16 11:37:03.283788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:57.934 malloc0 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.192 11:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=86677 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 86677 /var/tmp/bdevperf.sock 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86677 ']' 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.192 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:58.192 [2024-12-16 11:37:03.442476] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:16:58.192 [2024-12-16 11:37:03.442564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86677 ] 00:16:58.192 [2024-12-16 11:37:03.595250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.450 [2024-12-16 11:37:03.661337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.016 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.016 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:59.016 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.w9z 00:16:59.601 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:59.875 [2024-12-16 11:37:05.000562] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.875 TLSTESTn1 00:16:59.875 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.875 Running I/O for 10 seconds... 
00:17:01.815 4097.00 IOPS, 16.00 MiB/s [2024-12-16T11:37:08.603Z] 4177.00 IOPS, 16.32 MiB/s [2024-12-16T11:37:09.538Z] 4196.67 IOPS, 16.39 MiB/s [2024-12-16T11:37:10.472Z] 4221.75 IOPS, 16.49 MiB/s [2024-12-16T11:37:11.442Z] 4233.00 IOPS, 16.54 MiB/s [2024-12-16T11:37:12.376Z] 4241.00 IOPS, 16.57 MiB/s [2024-12-16T11:37:13.311Z] 4247.29 IOPS, 16.59 MiB/s [2024-12-16T11:37:14.243Z] 4256.38 IOPS, 16.63 MiB/s [2024-12-16T11:37:15.615Z] 4257.78 IOPS, 16.63 MiB/s [2024-12-16T11:37:15.615Z] 4268.30 IOPS, 16.67 MiB/s 00:17:10.201 Latency(us) 00:17:10.201 [2024-12-16T11:37:15.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.201 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.201 Verification LBA range: start 0x0 length 0x2000 00:17:10.201 TLSTESTn1 : 10.02 4274.22 16.70 0.00 0.00 29891.54 6047.19 29193.31 00:17:10.201 [2024-12-16T11:37:15.615Z] =================================================================================================================== 00:17:10.201 [2024-12-16T11:37:15.615Z] Total : 4274.22 16.70 0.00 0.00 29891.54 6047.19 29193.31 00:17:10.201 { 00:17:10.201 "results": [ 00:17:10.201 { 00:17:10.201 "job": "TLSTESTn1", 00:17:10.201 "core_mask": "0x4", 00:17:10.201 "workload": "verify", 00:17:10.201 "status": "finished", 00:17:10.201 "verify_range": { 00:17:10.201 "start": 0, 00:17:10.201 "length": 8192 00:17:10.201 }, 00:17:10.201 "queue_depth": 128, 00:17:10.201 "io_size": 4096, 00:17:10.201 "runtime": 10.015625, 00:17:10.201 "iops": 4274.221528861154, 00:17:10.201 "mibps": 16.696177847113884, 00:17:10.201 "io_failed": 0, 00:17:10.201 "io_timeout": 0, 00:17:10.201 "avg_latency_us": 29891.53722730352, 00:17:10.201 "min_latency_us": 6047.185454545454, 00:17:10.201 "max_latency_us": 29193.30909090909 00:17:10.201 } 00:17:10.201 ], 00:17:10.201 "core_count": 1 00:17:10.201 } 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:10.201 nvmf_trace.0 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86677 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86677 ']' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
86677 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86677 00:17:10.201 killing process with pid 86677 00:17:10.201 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.201 00:17:10.201 Latency(us) 00:17:10.201 [2024-12-16T11:37:15.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.201 [2024-12-16T11:37:15.615Z] =================================================================================================================== 00:17:10.201 [2024-12-16T11:37:15.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86677' 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86677 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86677 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:10.201 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.459 rmmod nvme_tcp 00:17:10.459 rmmod nvme_fabrics 00:17:10.459 rmmod nvme_keyring 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86622 ']' 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86622 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86622 ']' 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86622 00:17:10.459 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86622 00:17:10.460 killing process with pid 86622 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86622' 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86622 00:17:10.460 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86622 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:10.720 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:10.720 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:10.978 11:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.w9z 00:17:10.978 00:17:10.978 real 0m15.189s 00:17:10.978 user 0m21.518s 00:17:10.978 sys 0m5.644s 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 ************************************ 00:17:10.978 END TEST nvmf_fips 00:17:10.978 ************************************ 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 ************************************ 00:17:10.978 START TEST nvmf_control_msg_list 00:17:10.978 ************************************ 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.978 * Looking for test storage... 00:17:10.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.978 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.238 --rc genhtml_branch_coverage=1 00:17:11.238 --rc genhtml_function_coverage=1 00:17:11.238 --rc genhtml_legend=1 00:17:11.238 --rc geninfo_all_blocks=1 00:17:11.238 --rc geninfo_unexecuted_blocks=1 00:17:11.238 00:17:11.238 ' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.238 --rc genhtml_branch_coverage=1 00:17:11.238 --rc genhtml_function_coverage=1 00:17:11.238 --rc genhtml_legend=1 00:17:11.238 --rc geninfo_all_blocks=1 00:17:11.238 --rc geninfo_unexecuted_blocks=1 00:17:11.238 00:17:11.238 ' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.238 --rc genhtml_branch_coverage=1 00:17:11.238 --rc genhtml_function_coverage=1 00:17:11.238 --rc genhtml_legend=1 00:17:11.238 --rc geninfo_all_blocks=1 00:17:11.238 --rc geninfo_unexecuted_blocks=1 00:17:11.238 00:17:11.238 ' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.238 --rc genhtml_branch_coverage=1 00:17:11.238 --rc genhtml_function_coverage=1 00:17:11.238 --rc genhtml_legend=1 00:17:11.238 --rc geninfo_all_blocks=1 00:17:11.238 --rc 
geninfo_unexecuted_blocks=1 00:17:11.238 00:17:11.238 ' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.238 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:11.239 Cannot find device "nvmf_init_br" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:11.239 Cannot find device "nvmf_init_br2" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:11.239 Cannot find device "nvmf_tgt_br" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.239 Cannot find device "nvmf_tgt_br2" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:11.239 Cannot find device "nvmf_init_br" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:11.239 Cannot find device "nvmf_init_br2" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:11.239 Cannot find device "nvmf_tgt_br" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:11.239 Cannot find device "nvmf_tgt_br2" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:11.239 Cannot find device "nvmf_br" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:11.239 Cannot find 
device "nvmf_init_if" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:11.239 Cannot find device "nvmf_init_if2" 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:11.239 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:11.498 11:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:11.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:17:11.498 00:17:11.498 --- 10.0.0.3 ping statistics --- 00:17:11.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.498 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:11.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:11.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:17:11.498 00:17:11.498 --- 10.0.0.4 ping statistics --- 00:17:11.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.498 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:11.498 00:17:11.498 --- 10.0.0.1 ping statistics --- 00:17:11.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.498 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:11.498 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:11.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:11.498 00:17:11.498 --- 10.0.0.2 ping statistics --- 00:17:11.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.499 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.499 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=87085 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 87085 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 87085 ']' 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.757 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.757 [2024-12-16 11:37:16.983032] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:11.757 [2024-12-16 11:37:16.983131] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.757 [2024-12-16 11:37:17.138171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.015 [2024-12-16 11:37:17.208936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.015 [2024-12-16 11:37:17.209028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.015 [2024-12-16 11:37:17.209044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.015 [2024-12-16 11:37:17.209054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.015 [2024-12-16 11:37:17.209063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.015 [2024-12-16 11:37:17.209519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.581 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.581 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:12.581 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.581 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.581 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 [2024-12-16 11:37:18.019072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 Malloc0 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.840 [2024-12-16 11:37:18.062154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=87138 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=87140 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=87143 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:12.840 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 87138 00:17:12.840 [2024-12-16 11:37:18.247000] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release.
00:17:12.840 [2024-12-16 11:37:18.247272] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:17:12.840 [2024-12-16 11:37:18.247501] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:17:14.216 Initializing NVMe Controllers
00:17:14.216 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:17:14.216 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:17:14.216 Initialization complete. Launching workers.
00:17:14.216 ========================================================
00:17:14.216                                                                    Latency(us)
00:17:14.216 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:17:14.216 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2:    3398.00      13.27     293.99     162.67     530.78
00:17:14.216 ========================================================
00:17:14.216 Total                                                                     :    3398.00      13.27     293.99     162.67     530.78
00:17:14.216
00:17:14.216 Initializing NVMe Controllers
00:17:14.216 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:17:14.216 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:17:14.216 Initialization complete. Launching workers.
00:17:14.216 ========================================================
00:17:14.216                                                                    Latency(us)
00:17:14.216 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:17:14.216 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1:    3390.00      13.24     294.67     229.13     533.21
00:17:14.216 ========================================================
00:17:14.216 Total                                                                     :    3390.00      13.24     294.67     229.13     533.21
00:17:14.216
00:17:14.216 Initializing NVMe Controllers
00:17:14.216 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:17:14.216 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:17:14.216 Initialization complete. Launching workers.
00:17:14.216 ========================================================
00:17:14.216                                                                    Latency(us)
00:17:14.216 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:17:14.216 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3:    3384.81      13.22     295.16     244.42     940.03
00:17:14.216 ========================================================
00:17:14.216 Total                                                                     :    3384.81      13.22     295.16     244.42     940.03
00:17:14.216
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 87140
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 87143
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:14.216 rmmod nvme_tcp
00:17:14.216 rmmod nvme_fabrics
00:17:14.216 rmmod nvme_keyring
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 87085 ']'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 87085
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 87085 ']'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 87085
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87085
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:14.216 killing process with pid 87085
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87085'
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 87085
00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@978 -- # wait 87085 00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.216 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:14.474 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:14.475 00:17:14.475 real 0m3.619s 00:17:14.475 user 0m5.534s 00:17:14.475 
sys 0m1.433s 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.475 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:14.475 ************************************ 00:17:14.475 END TEST nvmf_control_msg_list 00:17:14.475 ************************************ 00:17:14.734 11:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:14.734 11:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:14.734 11:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.734 11:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.734 ************************************ 00:17:14.734 START TEST nvmf_wait_for_buf 00:17:14.734 ************************************ 00:17:14.734 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:14.734 * Looking for test storage... 00:17:14.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:14.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.734 --rc genhtml_branch_coverage=1 00:17:14.734 --rc genhtml_function_coverage=1 00:17:14.734 --rc genhtml_legend=1 00:17:14.734 --rc geninfo_all_blocks=1 00:17:14.734 --rc geninfo_unexecuted_blocks=1 00:17:14.734 00:17:14.734 ' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:14.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.734 --rc genhtml_branch_coverage=1 00:17:14.734 --rc genhtml_function_coverage=1 00:17:14.734 --rc genhtml_legend=1 00:17:14.734 --rc geninfo_all_blocks=1 00:17:14.734 --rc geninfo_unexecuted_blocks=1 00:17:14.734 00:17:14.734 ' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:14.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.734 --rc genhtml_branch_coverage=1 00:17:14.734 --rc genhtml_function_coverage=1 00:17:14.734 --rc genhtml_legend=1 00:17:14.734 --rc geninfo_all_blocks=1 00:17:14.734 --rc geninfo_unexecuted_blocks=1 00:17:14.734 00:17:14.734 ' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:14.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.734 --rc genhtml_branch_coverage=1 00:17:14.734 --rc genhtml_function_coverage=1 00:17:14.734 --rc genhtml_legend=1 00:17:14.734 --rc geninfo_all_blocks=1 00:17:14.734 --rc geninfo_unexecuted_blocks=1 00:17:14.734 00:17:14.734 ' 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:14.734 11:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.734 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
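
Both test suites in this log print `line 33: [: : integer expression expected` from nvmf/common.sh (the second occurrence is just above), triggered by the traced test `'[' '' -eq 1 ']'`: the `[` builtin's `-eq` operator demands integer operands, so an unset or empty variable expansion makes the test fail with status 2, which the surrounding conditional merely treats as false, so the run continues but stays noisy. A sketch of the failure mode and two conventional guards, assuming a flag variable in the style of the SPDK_TEST_* switches (the name `flag` here is illustrative):

    flag=""                   # unset/empty flag, as in the trace
    [ "$flag" -eq 1 ]         # stderr: "[: : integer expression expected"; exit status 2

    [ "${flag:-0}" -eq 1 ]    # guard 1: default the expansion so the operand is always an integer
    [ "$flag" = "1" ]         # guard 2: string comparison never requires an integer operand

Either form preserves the guard's logic while silencing the warning.
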
00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.735 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:14.735 Cannot find device "nvmf_init_br" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:14.994 Cannot find device "nvmf_init_br2" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:14.994 Cannot find device "nvmf_tgt_br" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.994 Cannot find device "nvmf_tgt_br2" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:14.994 Cannot find device "nvmf_init_br" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:14.994 Cannot find device "nvmf_init_br2" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:14.994 Cannot find device "nvmf_tgt_br" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:14.994 Cannot find device "nvmf_tgt_br2" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:14.994 Cannot find device "nvmf_br" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:14.994 Cannot find device "nvmf_init_if" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:14.994 Cannot find device "nvmf_init_if2" 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.994 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:14.994 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:15.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:15.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:15.253 00:17:15.253 --- 10.0.0.3 ping statistics --- 00:17:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.253 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:15.253 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:15.253 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:17:15.253 00:17:15.253 --- 10.0.0.4 ping statistics --- 00:17:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.253 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:15.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:15.253 00:17:15.253 --- 10.0.0.1 ping statistics --- 00:17:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.253 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:15.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:15.253 00:17:15.253 --- 10.0.0.2 ping statistics --- 00:17:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.253 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=87381 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 87381 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 87381 ']' 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.253 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.254 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.254 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.254 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.254 [2024-12-16 11:37:20.595595] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:17:15.254 [2024-12-16 11:37:20.595674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.512 [2024-12-16 11:37:20.739194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.512 [2024-12-16 11:37:20.799904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.512 [2024-12-16 11:37:20.799982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.512 [2024-12-16 11:37:20.800009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.512 [2024-12-16 11:37:20.800018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.512 [2024-12-16 11:37:20.800039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.512 [2024-12-16 11:37:20.800449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:15.512 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.512 11:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.770 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:15.770 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.770 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 Malloc0 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 [2024-12-16 11:37:21.016381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.771 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:15.771 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.771 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.771 [2024-12-16 11:37:21.044484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:15.771 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.771 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:16.029 [2024-12-16 11:37:21.237126] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:17:17.403 Initializing NVMe Controllers 00:17:17.403 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:17.403 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:17.403 Initialization complete. Launching workers. 00:17:17.403 ======================================================== 00:17:17.403 Latency(us) 00:17:17.403 Device Information : IOPS MiB/s Average min max 00:17:17.403 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.87 16.11 32106.91 8024.86 62009.64 00:17:17.403 ======================================================== 00:17:17.403 Total : 128.87 16.11 32106.91 8024.86 62009.64 00:17:17.403 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.403 rmmod nvme_tcp 00:17:17.403 rmmod nvme_fabrics 00:17:17.403 rmmod nvme_keyring 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 87381 ']' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 87381 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 87381 ']' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 87381 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
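The wait_for_buf pass criterion above is that non-zero small-pool retry counter: the target was started with only 154 small iobuf entries (iobuf_set_options) and a TCP transport capped at 24 shared buffers (-n 24 -b 24), so the 128 KiB random reads issued by spdk_nvme_perf are expected to exhaust the pool and exercise the buffer-wait path. A minimal sketch of the same check, assuming a running target and the in-repo rpc.py (RPC name, module name and jq filter taken from the trace above):

  # Query iobuf statistics and fail if the small pool never ran dry.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  retries=$("$rpc" iobuf_get_stats |
      jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  if [[ "$retries" -eq 0 ]]; then
      echo "FAIL: small iobuf pool was never exhausted"
  else
      echo "OK: $retries buffer-wait retries observed"
  fi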
00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87381 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.403 killing process with pid 87381 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87381' 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 87381 00:17:17.403 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 87381 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:17.662 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:17.662 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:17.663 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.921 11:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:17.921 00:17:17.921 real 0m3.282s 00:17:17.921 user 0m2.667s 00:17:17.921 sys 0m0.734s 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:17.921 ************************************ 00:17:17.921 END TEST nvmf_wait_for_buf 00:17:17.921 ************************************ 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.921 ************************************ 00:17:17.921 START TEST nvmf_nsid 00:17:17.921 ************************************ 00:17:17.921 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:18.180 * Looking for test storage... 
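The asterisk banners and the real/user/sys line above come from the autotest run_test wrapper, which times each sub-test and brackets its output so individual suites can be grepped out of a long log. A hypothetical re-implementation of just that pattern (a sketch, not SPDK's actual autotest_common.sh helper):

  # Banner-and-timing wrapper in the style seen in this log.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test_sketch nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp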
00:17:18.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:18.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.180 --rc genhtml_branch_coverage=1 00:17:18.180 --rc genhtml_function_coverage=1 00:17:18.180 --rc genhtml_legend=1 00:17:18.180 --rc geninfo_all_blocks=1 00:17:18.180 --rc geninfo_unexecuted_blocks=1 00:17:18.180 00:17:18.180 ' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:18.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.180 --rc genhtml_branch_coverage=1 00:17:18.180 --rc genhtml_function_coverage=1 00:17:18.180 --rc genhtml_legend=1 00:17:18.180 --rc geninfo_all_blocks=1 00:17:18.180 --rc geninfo_unexecuted_blocks=1 00:17:18.180 00:17:18.180 ' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:18.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.180 --rc genhtml_branch_coverage=1 00:17:18.180 --rc genhtml_function_coverage=1 00:17:18.180 --rc genhtml_legend=1 00:17:18.180 --rc geninfo_all_blocks=1 00:17:18.180 --rc geninfo_unexecuted_blocks=1 00:17:18.180 00:17:18.180 ' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:18.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.180 --rc genhtml_branch_coverage=1 00:17:18.180 --rc genhtml_function_coverage=1 00:17:18.180 --rc genhtml_legend=1 00:17:18.180 --rc geninfo_all_blocks=1 00:17:18.180 --rc geninfo_unexecuted_blocks=1 00:17:18.180 00:17:18.180 ' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
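Before sourcing test/nvmf/common.sh (the uname/FreeBSD check above), the harness picks lcov flags via the scripts/common.sh compare traced just before it: both version strings are split on '.', '-' and ':' and the fields compared left to right, so lcov 1.15 sorts below 2. A compact sketch of the same idea, assuming purely numeric dotted versions:

  # Return 0 (true) when version $1 sorts strictly before version $2.
  version_lt() {
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local v
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      done
      return 1  # equal versions are not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"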
00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.180 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:18.180 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:18.181 Cannot find device "nvmf_init_br" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:18.181 Cannot find device "nvmf_init_br2" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:18.181 Cannot find device "nvmf_tgt_br" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.181 Cannot find device "nvmf_tgt_br2" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:18.181 Cannot find device "nvmf_init_br" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:18.181 Cannot find device "nvmf_init_br2" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:18.181 Cannot find device "nvmf_tgt_br" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:18.181 Cannot find device "nvmf_tgt_br2" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:18.181 Cannot find device "nvmf_br" 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:18.181 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:18.439 Cannot find device "nvmf_init_if" 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:18.439 Cannot find device "nvmf_init_if2" 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:18.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:18.439 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:18.440 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
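At this point nvmf_veth_init has rebuilt the same topology the wait_for_buf run used: a nvmf_tgt_ns_spdk namespace holding the target-side veths (10.0.0.3/4), host-side initiator veths (10.0.0.1/2), and all four peer ends enslaved to the nvmf_br bridge. Condensed into a standalone sketch (interface and address names from the trace; error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The iptables ACCEPT rules for port 4420 and the bridge-internal FORWARD rule follow immediately below, after which the four pings verify host-to-namespace connectivity in both directions.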
00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:18.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:18.698 00:17:18.698 --- 10.0.0.3 ping statistics --- 00:17:18.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.698 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:18.698 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:18.698 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:17:18.698 00:17:18.698 --- 10.0.0.4 ping statistics --- 00:17:18.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.698 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:18.698 00:17:18.698 --- 10.0.0.1 ping statistics --- 00:17:18.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.698 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:18.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:18.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:18.698 00:17:18.698 --- 10.0.0.2 ping statistics --- 00:17:18.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.698 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=87647 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 87647 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87647 ']' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.698 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:18.698 [2024-12-16 11:37:23.984633] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:17:18.698 [2024-12-16 11:37:23.984759] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.957 [2024-12-16 11:37:24.135171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.957 [2024-12-16 11:37:24.199658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.957 [2024-12-16 11:37:24.199718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.957 [2024-12-16 11:37:24.199733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.957 [2024-12-16 11:37:24.199744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.957 [2024-12-16 11:37:24.199753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.957 [2024-12-16 11:37:24.200206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.957 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.957 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:18.957 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.957 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.957 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=87683 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
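The nsid test drives two targets at once: the main nvmf_tgt inside the namespace (RPC on the default /var/tmp/spdk.sock) and a second spdk_tgt on core mask 2 with its own RPC socket, /var/tmp/tgt2.sock. get_main_ns_ip above resolves the address that second target will listen on; for a tcp transport it picks NVMF_INITIATOR_IP, hence the echoed 10.0.0.1. A minimal sketch of launching and addressing such a secondary target (paths and socket name from the log; the readiness probe is my addition):

  # Start a second SPDK target with its own core and RPC socket.
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!
  # Every RPC for this instance must name the socket explicitly:
  "$spdk/scripts/rpc.py" -s /var/tmp/tgt2.sock rpc_get_methods > /dev/null && echo "tgt2 is up"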
00:17:19.215 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=13f1ba3c-92cc-4ac4-942e-c5d4a5db7877 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=df00730a-5861-40ed-ae8b-d1d841c6c810 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f2ba5247-c86d-4948-be5a-0e18789626d5 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.216 null0 00:17:19.216 null1 00:17:19.216 null2 00:17:19.216 [2024-12-16 11:37:24.434665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.216 [2024-12-16 11:37:24.454524] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:19.216 [2024-12-16 11:37:24.454628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87683 ] 00:17:19.216 [2024-12-16 11:37:24.458809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 87683 /var/tmp/tgt2.sock 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87683 ']' 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
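
The rpc_cmd batch at nsid.sh:63 runs with xtrace disabled, so only its side effects show in the trace: three null bdevs (null0, null1, null2) and a TCP listener on 10.0.0.3:4420. Given the three UUIDs generated above and the later connect to nqn.2024-10.io.spdk:cnode2 on 10.0.0.1:4421, the per-namespace wiring plausibly looks like the sketch below; the exact RPC sequence, and how the work is split between the two targets, is an assumption rather than a copy of the script:

    S=/var/tmp/tgt2.sock
    NQN=nqn.2024-10.io.spdk:cnode2
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s $S nvmf_create_transport -t tcp -o -u 8192
    $RPC -s $S nvmf_create_subsystem $NQN -a
    # One null bdev per namespace; pinning the namespace UUID makes the
    # NGUID seen by the host side predictable (checked further down).
    $RPC -s $S bdev_null_create null0 64 512
    $RPC -s $S bdev_null_create null1 64 512
    $RPC -s $S bdev_null_create null2 64 512
    $RPC -s $S nvmf_subsystem_add_ns $NQN null0 -n 1 -u 13f1ba3c-92cc-4ac4-942e-c5d4a5db7877
    $RPC -s $S nvmf_subsystem_add_ns $NQN null1 -n 2 -u df00730a-5861-40ed-ae8b-d1d841c6c810
    $RPC -s $S nvmf_subsystem_add_ns $NQN null2 -n 3 -u f2ba5247-c86d-4948-be5a-0e18789626d5
    $RPC -s $S nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.1 -s 4421
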
00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.216 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.216 [2024-12-16 11:37:24.608479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.474 [2024-12-16 11:37:24.685032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.732 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.732 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:19.732 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:19.990 [2024-12-16 11:37:25.381071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.990 [2024-12-16 11:37:25.397147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:20.249 nvme0n1 nvme0n2 00:17:20.249 nvme1n1 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:20.249 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:21.623 11:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 13f1ba3c-92cc-4ac4-942e-c5d4a5db7877 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=13f1ba3c92cc4ac4942ec5d4a5db7877 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 13F1BA3C92CC4AC4942EC5D4A5DB7877 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 13F1BA3C92CC4AC4942EC5D4A5DB7877 == \1\3\F\1\B\A\3\C\9\2\C\C\4\A\C\4\9\4\2\E\C\5\D\4\A\5\D\B\7\8\7\7 ]] 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid df00730a-5861-40ed-ae8b-d1d841c6c810 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=df00730a586140edae8bd1d841c6c810 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DF00730A586140EDAE8BD1D841C6C810 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DF00730A586140EDAE8BD1D841C6C810 == \D\F\0\0\7\3\0\A\5\8\6\1\4\0\E\D\A\E\8\B\D\1\D\8\4\1\C\6\C\8\1\0 ]] 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:21.623 11:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:21.623 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f2ba5247-c86d-4948-be5a-0e18789626d5 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f2ba5247c86d4948be5a0e18789626d5 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F2BA5247C86D4948BE5A0E18789626D5 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F2BA5247C86D4948BE5A0E18789626D5 == \F\2\B\A\5\2\4\7\C\8\6\D\4\9\4\8\B\E\5\A\0\E\1\8\7\8\9\6\2\6\D\5 ]] 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 87683 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87683 ']' 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87683 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.624 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87683 00:17:21.624 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.624 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.624 killing process with pid 87683 00:17:21.624 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87683' 00:17:21.624 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87683 00:17:21.624 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87683 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
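
Each of the three nsid checks above follows the same pattern: an NGUID carries the namespace UUID's 16 bytes, so the expected value is simply the generated UUID with dashes stripped, while the actual value is read back over the fabric with nvme id-ns and compared uppercase against uppercase. Condensed from the trace into a helper:

    # NGUID and UUID share the same 16 bytes; uuid2nguid just reformats.
    uuid2nguid() { tr -d - <<< "${1^^}"; }

    check_nguid() {  # usage: check_nguid <ctrlr> <nsid> <expected-uuid>
        local expected actual
        expected=$(uuid2nguid "$3")
        actual=$(nvme id-ns "/dev/$1n$2" -o json | jq -r .nguid)
        [[ ${actual^^} == "$expected" ]]
    }

    check_nguid nvme0 1 13f1ba3c-92cc-4ac4-942e-c5d4a5db7877
    check_nguid nvme0 2 df00730a-5861-40ed-ae8b-d1d841c6c810
    check_nguid nvme0 3 f2ba5247-c86d-4948-be5a-0e18789626d5
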
00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.190 rmmod nvme_tcp 00:17:22.190 rmmod nvme_fabrics 00:17:22.190 rmmod nvme_keyring 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 87647 ']' 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 87647 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87647 ']' 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87647 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87647 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.190 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.190 killing process with pid 87647 00:17:22.191 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87647' 00:17:22.191 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87647 00:17:22.191 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87647 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:22.448 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:22.707 00:17:22.707 real 0m4.720s 00:17:22.707 user 0m7.326s 00:17:22.707 sys 0m1.376s 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.707 ************************************ 00:17:22.707 END TEST nvmf_nsid 00:17:22.707 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 ************************************ 00:17:22.707 11:37:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:22.707 00:17:22.707 real 7m25.852s 00:17:22.707 user 17m58.046s 00:17:22.707 sys 1m26.147s 00:17:22.707 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.707 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 ************************************ 00:17:22.707 END TEST nvmf_target_extra 00:17:22.707 ************************************ 00:17:22.707 11:37:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:22.707 11:37:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.707 11:37:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.707 11:37:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 ************************************ 00:17:22.707 START TEST nvmf_host 00:17:22.707 ************************************ 00:17:22.707 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:22.964 * Looking for test storage... 
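
The nvmftestfini pass above tears things down in reverse order of setup: unload the initiator-side kernel modules, kill the target, strip the firewall rules, then dismantle the virtual network. Because every iptables rule added during setup was tagged with an SPDK_NVMF comment, a save/filter/restore round trip removes exactly the test's rules and nothing else. The network teardown, reconstructed from the trace (only the final netns removal is hidden behind xtrace_disable_per_cmd and is an assumption here):

    # Drop only the rules the test installed, identified by their comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach all ports from the bridge, then remove links and the namespace.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # presumed body of _remove_spdk_ns
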
00:17:22.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:22.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.965 --rc genhtml_branch_coverage=1 00:17:22.965 --rc genhtml_function_coverage=1 00:17:22.965 --rc genhtml_legend=1 00:17:22.965 --rc geninfo_all_blocks=1 00:17:22.965 --rc geninfo_unexecuted_blocks=1 00:17:22.965 00:17:22.965 ' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.965 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:22.965 --rc genhtml_branch_coverage=1 00:17:22.965 --rc genhtml_function_coverage=1 00:17:22.965 --rc genhtml_legend=1 00:17:22.965 --rc geninfo_all_blocks=1 00:17:22.965 --rc geninfo_unexecuted_blocks=1 00:17:22.965 00:17:22.965 ' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.965 --rc genhtml_branch_coverage=1 00:17:22.965 --rc genhtml_function_coverage=1 00:17:22.965 --rc genhtml_legend=1 00:17:22.965 --rc geninfo_all_blocks=1 00:17:22.965 --rc geninfo_unexecuted_blocks=1 00:17:22.965 00:17:22.965 ' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.965 --rc genhtml_branch_coverage=1 00:17:22.965 --rc genhtml_function_coverage=1 00:17:22.965 --rc genhtml_legend=1 00:17:22.965 --rc geninfo_all_blocks=1 00:17:22.965 --rc geninfo_unexecuted_blocks=1 00:17:22.965 00:17:22.965 ' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.965 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
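
The lcov probe above gates the branch/function coverage flags on the installed lcov major version: `lt 1.15 2` is a generic dotted-version comparison that splits both versions on '.', '-' and ':' and walks the fields left to right, padding the shorter version with zeros. A condensed equivalent of the cmp_versions walk shown in the trace (the upstream helper also sanitizes non-numeric fields, which this sketch assumes away):

    # cmp_versions <v1> <op> <v2>, condensed; assumes purely numeric fields.
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"
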
00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.965 ************************************ 00:17:22.965 START TEST nvmf_multicontroller 00:17:22.965 ************************************ 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:22.965 * Looking for test storage... 00:17:22.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.965 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.224 --rc genhtml_branch_coverage=1 00:17:23.224 --rc genhtml_function_coverage=1 00:17:23.224 --rc genhtml_legend=1 00:17:23.224 --rc geninfo_all_blocks=1 00:17:23.224 --rc geninfo_unexecuted_blocks=1 00:17:23.224 00:17:23.224 ' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.224 --rc genhtml_branch_coverage=1 00:17:23.224 --rc genhtml_function_coverage=1 00:17:23.224 --rc genhtml_legend=1 00:17:23.224 --rc geninfo_all_blocks=1 00:17:23.224 --rc geninfo_unexecuted_blocks=1 00:17:23.224 00:17:23.224 ' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.224 --rc genhtml_branch_coverage=1 00:17:23.224 --rc genhtml_function_coverage=1 00:17:23.224 --rc genhtml_legend=1 00:17:23.224 --rc geninfo_all_blocks=1 00:17:23.224 --rc geninfo_unexecuted_blocks=1 00:17:23.224 00:17:23.224 ' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.224 --rc genhtml_branch_coverage=1 00:17:23.224 --rc genhtml_function_coverage=1 00:17:23.224 --rc genhtml_legend=1 00:17:23.224 --rc geninfo_all_blocks=1 00:17:23.224 --rc geninfo_unexecuted_blocks=1 00:17:23.224 00:17:23.224 ' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:23.224 11:37:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.224 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.225 11:37:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.225 11:37:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.225 Cannot find device "nvmf_init_br" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.225 Cannot find device "nvmf_init_br2" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.225 Cannot find device "nvmf_tgt_br" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.225 Cannot find device "nvmf_tgt_br2" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.225 Cannot find device "nvmf_init_br" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.225 Cannot find device "nvmf_init_br2" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.225 Cannot find device "nvmf_tgt_br" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.225 Cannot find device "nvmf_tgt_br2" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.225 Cannot find device "nvmf_br" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.225 Cannot find device "nvmf_init_if" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.225 Cannot find device "nvmf_init_if2" 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:17:23.225 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.483 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.484 11:37:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.384 ms 00:17:23.484 00:17:23.484 --- 10.0.0.3 ping statistics --- 00:17:23.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.484 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:23.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:23.484 00:17:23.484 --- 10.0.0.4 ping statistics --- 00:17:23.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.484 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:23.484 00:17:23.484 --- 10.0.0.1 ping statistics --- 00:17:23.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.484 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:23.484 00:17:23.484 --- 10.0.0.2 ping statistics --- 00:17:23.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.484 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.484 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=88054 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 88054 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 88054 ']' 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.742 11:37:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 [2024-12-16 11:37:28.970410] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:17:23.742 [2024-12-16 11:37:28.971243] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.742 [2024-12-16 11:37:29.126606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:24.000 [2024-12-16 11:37:29.192202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.000 [2024-12-16 11:37:29.192263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.000 [2024-12-16 11:37:29.192287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.000 [2024-12-16 11:37:29.192296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.000 [2024-12-16 11:37:29.192306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.000 [2024-12-16 11:37:29.193499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.000 [2024-12-16 11:37:29.193661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.000 [2024-12-16 11:37:29.193678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.000 [2024-12-16 11:37:29.399124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.000 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 Malloc0 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 [2024-12-16 11:37:29.464707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 [2024-12-16 11:37:29.472595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 Malloc1 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=88094 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 88094 /var/tmp/bdevperf.sock 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 88094 ']' 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
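
A note on the plumbing above: rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, talking to the default /var/tmp/spdk.sock unless a socket is named with -s, so the target-side setup just traced boils down to a short RPC sequence. A minimal sketch, with method names and arguments taken from the multicontroller.sh calls above (cnode2/Malloc1 repeat the same pattern):

    # transport, one malloc-backed namespace, and two listeners per subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The second listener on 4421 is what later gives the host a second network path to the same subsystem; bdevperf is started with -z so it idles until controllers are attached over /var/tmp/bdevperf.sock.
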
00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.259 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.826 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:24.826 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:24.826 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.826 11:37:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 NVMe0n1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.826 1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 2024/12/16 11:37:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:24.826 request: 00:17:24.826 { 00:17:24.826 "method": "bdev_nvme_attach_controller", 00:17:24.826 "params": { 00:17:24.826 "name": "NVMe0", 00:17:24.826 "trtype": "tcp", 00:17:24.826 "traddr": "10.0.0.3", 00:17:24.826 "adrfam": "ipv4", 00:17:24.826 "trsvcid": "4420", 00:17:24.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.826 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:24.826 "hostaddr": "10.0.0.1", 00:17:24.826 "prchk_reftag": false, 00:17:24.826 "prchk_guard": false, 00:17:24.826 "hdgst": false, 00:17:24.826 "ddgst": false, 00:17:24.826 "allow_unrecognized_csi": false 00:17:24.826 } 00:17:24.826 } 00:17:24.826 Got JSON-RPC error response 00:17:24.826 GoRPCClient: error on JSON-RPC call 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 2024/12/16 11:37:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:24.826 request: 00:17:24.826 { 00:17:24.826 "method": "bdev_nvme_attach_controller", 00:17:24.826 "params": { 00:17:24.826 "name": "NVMe0", 00:17:24.826 "trtype": "tcp", 00:17:24.826 "traddr": "10.0.0.3", 00:17:24.826 "adrfam": "ipv4", 00:17:24.826 "trsvcid": "4420", 00:17:24.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:24.826 "hostaddr": "10.0.0.1", 00:17:24.826 "prchk_reftag": false, 00:17:24.826 "prchk_guard": false, 00:17:24.826 "hdgst": false, 00:17:24.826 "ddgst": false, 00:17:24.826 "allow_unrecognized_csi": false 00:17:24.826 } 00:17:24.826 } 00:17:24.826 Got JSON-RPC error response 00:17:24.826 GoRPCClient: error on JSON-RPC call 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.826 2024/12/16 11:37:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:24.826 request: 00:17:24.826 { 00:17:24.826 
"method": "bdev_nvme_attach_controller", 00:17:24.826 "params": { 00:17:24.826 "name": "NVMe0", 00:17:24.826 "trtype": "tcp", 00:17:24.826 "traddr": "10.0.0.3", 00:17:24.826 "adrfam": "ipv4", 00:17:24.826 "trsvcid": "4420", 00:17:24.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.826 "hostaddr": "10.0.0.1", 00:17:24.826 "prchk_reftag": false, 00:17:24.826 "prchk_guard": false, 00:17:24.826 "hdgst": false, 00:17:24.826 "ddgst": false, 00:17:24.826 "multipath": "disable", 00:17:24.826 "allow_unrecognized_csi": false 00:17:24.826 } 00:17:24.826 } 00:17:24.826 Got JSON-RPC error response 00:17:24.826 GoRPCClient: error on JSON-RPC call 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.826 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.827 2024/12/16 11:37:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:24.827 request: 00:17:24.827 { 00:17:24.827 "method": "bdev_nvme_attach_controller", 00:17:24.827 "params": { 00:17:24.827 "name": "NVMe0", 00:17:24.827 "trtype": "tcp", 00:17:24.827 "traddr": 
"10.0.0.3", 00:17:24.827 "adrfam": "ipv4", 00:17:24.827 "trsvcid": "4420", 00:17:24.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.827 "hostaddr": "10.0.0.1", 00:17:24.827 "prchk_reftag": false, 00:17:24.827 "prchk_guard": false, 00:17:24.827 "hdgst": false, 00:17:24.827 "ddgst": false, 00:17:24.827 "multipath": "failover", 00:17:24.827 "allow_unrecognized_csi": false 00:17:24.827 } 00:17:24.827 } 00:17:24.827 Got JSON-RPC error response 00:17:24.827 GoRPCClient: error on JSON-RPC call 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.827 NVMe0n1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.827 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.085 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.085 11:37:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:25.085 11:37:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.458 { 00:17:26.458 "results": [ 00:17:26.458 { 00:17:26.458 "job": "NVMe0n1", 00:17:26.458 "core_mask": "0x1", 00:17:26.458 "workload": "write", 00:17:26.458 "status": "finished", 00:17:26.458 "queue_depth": 128, 00:17:26.458 "io_size": 4096, 00:17:26.458 "runtime": 1.008734, 00:17:26.458 "iops": 19584.94508958754, 00:17:26.458 "mibps": 76.50369175620133, 00:17:26.458 "io_failed": 0, 00:17:26.458 "io_timeout": 0, 00:17:26.458 "avg_latency_us": 6524.760100866941, 00:17:26.458 "min_latency_us": 2338.4436363636364, 00:17:26.458 "max_latency_us": 13702.981818181817 00:17:26.458 } 00:17:26.458 ], 00:17:26.458 "core_count": 1 00:17:26.458 } 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 nvme1n1 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
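
The three rejected attaches above pin down the naming rules bdev_nvme enforces: a second controller under an existing name is only accepted as a genuinely new path to the same subsystem, so reusing the name with a different hostnqn, a different subsystem, or the very same address/port fails with Code=-114, -x disable refuses extra paths entirely, and only the distinct 10.0.0.3:4421 path goes through. The checks that follow confirm that -i really pins the initiator's source address, since the target reports each connection's peer in nvmf_subsystem_get_qpairs. A condensed sketch of that round trip, using the same RPCs traced here:

    # attach through the second initiator address, then ask the target who connected
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
    ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
        | jq -r '.[].peer_address.traddr'    # expected: 10.0.0.2
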
00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 nvme1n1 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:17:26.458 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 88094 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 88094 ']' 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 88094 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88094 00:17:26.459 killing process with pid 88094 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88094' 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 88094 00:17:26.459 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 88094 00:17:26.719 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.719 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.719 11:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:17:26.719 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:26.719 [2024-12-16 11:37:29.598435] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:26.719 [2024-12-16 11:37:29.599193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88094 ] 00:17:26.719 [2024-12-16 11:37:29.758030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.719 [2024-12-16 11:37:29.820718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.719 [2024-12-16 11:37:30.262262] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name c5d08d06-6ca4-4bb7-92c8-31f102d50801 already exists 00:17:26.719 [2024-12-16 11:37:30.262324] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:c5d08d06-6ca4-4bb7-92c8-31f102d50801 alias for bdev NVMe1n1 00:17:26.719 [2024-12-16 11:37:30.262345] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:26.719 Running I/O for 1 seconds... 
00:17:26.719 19563.00 IOPS, 76.42 MiB/s 00:17:26.719 Latency(us) 00:17:26.719 [2024-12-16T11:37:32.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.719 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:26.719 NVMe0n1 : 1.01 19584.95 76.50 0.00 0.00 6524.76 2338.44 13702.98 00:17:26.719 [2024-12-16T11:37:32.133Z] =================================================================================================================== 00:17:26.719 [2024-12-16T11:37:32.133Z] Total : 19584.95 76.50 0.00 0.00 6524.76 2338.44 13702.98 00:17:26.719 Received shutdown signal, test time was about 1.000000 seconds 00:17:26.719 00:17:26.719 Latency(us) 00:17:26.719 [2024-12-16T11:37:32.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.719 [2024-12-16T11:37:32.133Z] =================================================================================================================== 00:17:26.719 [2024-12-16T11:37:32.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.719 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.719 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.719 rmmod nvme_tcp 00:17:26.719 rmmod nvme_fabrics 00:17:26.719 rmmod nvme_keyring 00:17:26.981 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 88054 ']' 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 88054 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 88054 ']' 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 88054 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88054 00:17:26.982 killing process with pid 88054 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:26.982 11:37:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88054' 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 88054 00:17:26.982 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 88054 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:27.240 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.241 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.241 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
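
Teardown mirrors setup. The iptr step above restores the firewall by replaying the current ruleset minus the rules this run added, which works because every rule installed through ipts earlier carried an "SPDK_NVMF:..." comment; the traced iptables-save / grep -v SPDK_NVMF / iptables-restore trio amounts, in effect, to:

    # drop only the rules tagged by ipts; unrelated rules survive the restore
    iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, nvmf_veth_fini unwinds the topology in reverse order (detach from the bridge, links down, delete the bridge, the veth pairs, and finally the nvmf_tgt_ns_spdk namespace), so the next test in the job starts from a clean slate.
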
00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:17:27.499 00:17:27.499 real 0m4.374s 00:17:27.499 user 0m12.510s 00:17:27.499 sys 0m1.172s 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.499 ************************************ 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.499 END TEST nvmf_multicontroller 00:17:27.499 ************************************ 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.499 ************************************ 00:17:27.499 START TEST nvmf_aer 00:17:27.499 ************************************ 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:27.499 * Looking for test storage... 00:17:27.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.499 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.500 --rc genhtml_branch_coverage=1 00:17:27.500 --rc genhtml_function_coverage=1 00:17:27.500 --rc genhtml_legend=1 00:17:27.500 --rc geninfo_all_blocks=1 00:17:27.500 --rc geninfo_unexecuted_blocks=1 00:17:27.500 00:17:27.500 ' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.500 --rc genhtml_branch_coverage=1 00:17:27.500 --rc genhtml_function_coverage=1 00:17:27.500 --rc genhtml_legend=1 00:17:27.500 --rc geninfo_all_blocks=1 00:17:27.500 --rc geninfo_unexecuted_blocks=1 00:17:27.500 00:17:27.500 ' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.500 --rc genhtml_branch_coverage=1 00:17:27.500 --rc genhtml_function_coverage=1 00:17:27.500 --rc genhtml_legend=1 00:17:27.500 --rc geninfo_all_blocks=1 00:17:27.500 --rc geninfo_unexecuted_blocks=1 00:17:27.500 00:17:27.500 ' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.500 --rc genhtml_branch_coverage=1 00:17:27.500 --rc genhtml_function_coverage=1 00:17:27.500 --rc genhtml_legend=1 00:17:27.500 --rc geninfo_all_blocks=1 00:17:27.500 --rc geninfo_unexecuted_blocks=1 00:17:27.500 00:17:27.500 ' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.500 
11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.500 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
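
One harmless wart shows up while aer.sh re-sources test/nvmf/common.sh: build_nvmf_app_args evaluates '[' '' -eq 1 ']' at line 33, and the test builtin complains "[: : integer expression expected" because an unset flag reaches an arithmetic comparison as an empty string. The comparison simply evaluates false and the run continues, but the message could be silenced by defaulting the flag before testing it; a sketch of the usual guard, with SPDK_TEST_EXAMPLE_FLAG standing in for whichever variable is empty here (the trace does not name it):

    flag=${SPDK_TEST_EXAMPLE_FLAG:-0}    # treat unset/empty as 0
    if [ "$flag" -eq 1 ]; then
        echo "feature enabled"
    fi
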
00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.759 Cannot find device "nvmf_init_br" 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.759 Cannot find device "nvmf_init_br2" 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.759 Cannot find device "nvmf_tgt_br" 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.759 Cannot find device "nvmf_tgt_br2" 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:27.759 Cannot find device "nvmf_init_br" 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:27.759 Cannot find device "nvmf_init_br2" 00:17:27.759 11:37:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:17:27.759 11:37:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:27.759 Cannot find device "nvmf_tgt_br" 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:27.759 Cannot find device "nvmf_tgt_br2" 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:27.759 Cannot find device "nvmf_br" 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:27.759 Cannot find device "nvmf_init_if" 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:27.759 Cannot find device "nvmf_init_if2" 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:17:27.759 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:27.760 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:28.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:28.018 00:17:28.018 --- 10.0.0.3 ping statistics --- 00:17:28.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.018 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:28.018 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:28.019 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:28.019 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:28.019 00:17:28.019 --- 10.0.0.4 ping statistics --- 00:17:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.019 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:28.019 00:17:28.019 --- 10.0.0.1 ping statistics --- 00:17:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.019 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:28.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:28.019 00:17:28.019 --- 10.0.0.2 ping statistics --- 00:17:28.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.019 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=88393 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 88393 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 88393 ']' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.019 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.019 [2024-12-16 11:37:33.391815] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
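
Between the cleanup attempts and the pings above, nvmf_veth_init builds the whole test network from scratch: a namespace for the target, four veth pairs whose bridge-side peers are enslaved to nvmf_br, iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment, and a four-way ping check. A condensed sketch of one initiator/target pair, assembled from the traced commands (the second pair and the remaining addresses follow the same pattern; the comment text here is shortened, the real helper embeds the full rule):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # the SPDK_NVMF comment lets teardown strip every rule at once via
    # "iptables-save | grep -v SPDK_NVMF | iptables-restore" (the iptr
    # helper seen near the end of the test)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF:allow-4420
    ping -c 1 10.0.0.3           # initiator side reaches the target side
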
00:17:28.019 [2024-12-16 11:37:33.392190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.277 [2024-12-16 11:37:33.546270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.277 [2024-12-16 11:37:33.611858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.277 [2024-12-16 11:37:33.612330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.277 [2024-12-16 11:37:33.612614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.277 [2024-12-16 11:37:33.612969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.277 [2024-12-16 11:37:33.613234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.277 [2024-12-16 11:37:33.614736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.277 [2024-12-16 11:37:33.614838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.277 [2024-12-16 11:37:33.614915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.277 [2024-12-16 11:37:33.614916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 [2024-12-16 11:37:33.794209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 Malloc0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 [2024-12-16 11:37:33.857787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 [ 00:17:28.536 { 00:17:28.536 "allow_any_host": true, 00:17:28.536 "hosts": [], 00:17:28.536 "listen_addresses": [], 00:17:28.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.536 "subtype": "Discovery" 00:17:28.536 }, 00:17:28.536 { 00:17:28.536 "allow_any_host": true, 00:17:28.536 "hosts": [], 00:17:28.536 "listen_addresses": [ 00:17:28.536 { 00:17:28.536 "adrfam": "IPv4", 00:17:28.536 "traddr": "10.0.0.3", 00:17:28.536 "trsvcid": "4420", 00:17:28.536 "trtype": "TCP" 00:17:28.536 } 00:17:28.536 ], 00:17:28.536 "max_cntlid": 65519, 00:17:28.536 "max_namespaces": 2, 00:17:28.536 "min_cntlid": 1, 00:17:28.536 "model_number": "SPDK bdev Controller", 00:17:28.536 "namespaces": [ 00:17:28.536 { 00:17:28.536 "bdev_name": "Malloc0", 00:17:28.536 "name": "Malloc0", 00:17:28.536 "nguid": "60EDD8768D62435F80BADBF21765289B", 00:17:28.536 "nsid": 1, 00:17:28.536 "uuid": "60edd876-8d62-435f-80ba-dbf21765289b" 00:17:28.536 } 00:17:28.536 ], 00:17:28.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.536 "serial_number": "SPDK00000000000001", 00:17:28.536 "subtype": "NVMe" 00:17:28.536 } 00:17:28.536 ] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=88433 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:28.536 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:28.794 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:28.794 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:28.794 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:28.794 11:37:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.794 Malloc1 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.794 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.794 Asynchronous Event Request test 00:17:28.795 Attaching to 10.0.0.3 00:17:28.795 Attached to 10.0.0.3 00:17:28.795 Registering asynchronous event callbacks... 00:17:28.795 Starting namespace attribute notice tests for all controllers... 00:17:28.795 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:28.795 aer_cb - Changed Namespace 00:17:28.795 Cleaning up... 
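
The aer test body that just completed reduces to a short sequence: provision a subsystem with one namespace and a TCP listener, start the aer tool against it, poll for the touch file it creates once its callbacks are registered, then hot-add a second namespace so the target emits the namespace-attribute-changed AEN ("aer_cb - Changed Namespace" above). A hypothetical standalone transcription, using scripts/rpc.py where the test uses its rpc_cmd wrapper, with the polling loop shown without its traced 200-iteration cap:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    rm -f /tmp/aer_touch_file
    ./test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # waitforfile
    # adding the second namespace is what fires the AEN the tool waits for
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"
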
00:17:28.795 [ 00:17:28.795 { 00:17:28.795 "allow_any_host": true, 00:17:28.795 "hosts": [], 00:17:28.795 "listen_addresses": [], 00:17:28.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.795 "subtype": "Discovery" 00:17:28.795 }, 00:17:28.795 { 00:17:28.795 "allow_any_host": true, 00:17:28.795 "hosts": [], 00:17:28.795 "listen_addresses": [ 00:17:28.795 { 00:17:28.795 "adrfam": "IPv4", 00:17:28.795 "traddr": "10.0.0.3", 00:17:28.795 "trsvcid": "4420", 00:17:28.795 "trtype": "TCP" 00:17:28.795 } 00:17:28.795 ], 00:17:28.795 "max_cntlid": 65519, 00:17:28.795 "max_namespaces": 2, 00:17:28.795 "min_cntlid": 1, 00:17:28.795 "model_number": "SPDK bdev Controller", 00:17:28.795 "namespaces": [ 00:17:28.795 { 00:17:28.795 "bdev_name": "Malloc0", 00:17:28.795 "name": "Malloc0", 00:17:28.795 "nguid": "60EDD8768D62435F80BADBF21765289B", 00:17:28.795 "nsid": 1, 00:17:28.795 "uuid": "60edd876-8d62-435f-80ba-dbf21765289b" 00:17:28.795 }, 00:17:28.795 { 00:17:28.795 "bdev_name": "Malloc1", 00:17:28.795 "name": "Malloc1", 00:17:28.795 "nguid": "41F2F0C8B4E84E888EBF589CAA2B7BAE", 00:17:28.795 "nsid": 2, 00:17:28.795 "uuid": "41f2f0c8-b4e8-4e88-8ebf-589caa2b7bae" 00:17:28.795 } 00:17:28.795 ], 00:17:28.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.795 "serial_number": "SPDK00000000000001", 00:17:28.795 "subtype": "NVMe" 00:17:28.795 } 00:17:28.795 ] 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 88433 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:28.795 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.053 rmmod nvme_tcp 
00:17:29.053 rmmod nvme_fabrics 00:17:29.053 rmmod nvme_keyring 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 88393 ']' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 88393 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 88393 ']' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 88393 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88393 00:17:29.053 killing process with pid 88393 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88393' 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 88393 00:17:29.053 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 88393 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:29.311 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:29.312 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:29.570 11:37:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:17:29.570 00:17:29.570 real 0m2.152s 00:17:29.570 user 0m4.147s 00:17:29.570 sys 0m0.742s 00:17:29.570 ************************************ 00:17:29.570 END TEST nvmf_aer 00:17:29.570 ************************************ 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 ************************************ 00:17:29.570 START TEST nvmf_async_init 00:17:29.570 ************************************ 00:17:29.570 11:37:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:29.829 * Looking for test storage... 
00:17:29.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.829 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:29.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.829 --rc genhtml_branch_coverage=1 00:17:29.829 --rc genhtml_function_coverage=1 00:17:29.830 --rc genhtml_legend=1 00:17:29.830 --rc geninfo_all_blocks=1 00:17:29.830 --rc geninfo_unexecuted_blocks=1 00:17:29.830 00:17:29.830 ' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.830 --rc genhtml_branch_coverage=1 00:17:29.830 --rc genhtml_function_coverage=1 00:17:29.830 --rc genhtml_legend=1 00:17:29.830 --rc geninfo_all_blocks=1 00:17:29.830 --rc geninfo_unexecuted_blocks=1 00:17:29.830 00:17:29.830 ' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.830 --rc genhtml_branch_coverage=1 00:17:29.830 --rc genhtml_function_coverage=1 00:17:29.830 --rc genhtml_legend=1 00:17:29.830 --rc geninfo_all_blocks=1 00:17:29.830 --rc geninfo_unexecuted_blocks=1 00:17:29.830 00:17:29.830 ' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.830 --rc genhtml_branch_coverage=1 00:17:29.830 --rc genhtml_function_coverage=1 00:17:29.830 --rc genhtml_legend=1 00:17:29.830 --rc geninfo_all_blocks=1 00:17:29.830 --rc geninfo_unexecuted_blocks=1 00:17:29.830 00:17:29.830 ' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.830 11:37:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.830 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:29.830 11:37:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c8ec16b59fa74c3f9ade195a572918fe 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
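
The nguid assigned a few entries back is just a fresh UUID with its dashes stripped, giving the 32-hex-digit form NVMe expects; async_init passes it explicitly when adding the null bdev to the subsystem (the -g flag traced further below). The two steps, condensed:

    nguid=$(uuidgen | tr -d -)   # e.g. c8ec16b59fa74c3f9ade195a572918fe
    # later in the test:
    # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
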
00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.830 Cannot find device "nvmf_init_br" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.830 Cannot find device "nvmf_init_br2" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.830 Cannot find device "nvmf_tgt_br" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.830 Cannot find device "nvmf_tgt_br2" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.830 Cannot find device "nvmf_init_br" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.830 Cannot find device "nvmf_init_br2" 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:17:29.830 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:30.088 Cannot find device "nvmf_tgt_br" 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:30.088 Cannot find device "nvmf_tgt_br2" 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:30.088 Cannot find device "nvmf_br" 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:30.088 Cannot find device "nvmf_init_if" 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:30.088 Cannot find device "nvmf_init_if2" 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:17:30.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.088 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.088 11:37:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.089 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:30.089 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:30.089 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:30.089 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.089 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.346 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:30.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:30.346 00:17:30.346 --- 10.0.0.3 ping statistics --- 00:17:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.346 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:30.346 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:30.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:30.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:30.346 00:17:30.346 --- 10.0.0.4 ping statistics --- 00:17:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.346 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:30.346 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:30.346 00:17:30.346 --- 10.0.0.1 ping statistics --- 00:17:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.346 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:30.346 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:30.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:30.346 00:17:30.346 --- 10.0.0.2 ping statistics --- 00:17:30.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.346 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=88658 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 88658 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 88658 ']' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.347 11:37:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 [2024-12-16 11:37:35.624734] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:30.347 [2024-12-16 11:37:35.624852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.603 [2024-12-16 11:37:35.773854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.603 [2024-12-16 11:37:35.827387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:30.603 [2024-12-16 11:37:35.827464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.604 [2024-12-16 11:37:35.827477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.604 [2024-12-16 11:37:35.827485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.604 [2024-12-16 11:37:35.827492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.604 [2024-12-16 11:37:35.827874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 [2024-12-16 11:37:36.706893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 null0 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c8ec16b59fa74c3f9ade195a572918fe 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.535 [2024-12-16 11:37:36.747052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.535 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 nvme0n1 00:17:31.793 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.793 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:31.793 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.793 11:37:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 [ 00:17:31.793 { 00:17:31.793 "aliases": [ 00:17:31.793 "c8ec16b5-9fa7-4c3f-9ade-195a572918fe" 00:17:31.793 ], 00:17:31.793 "assigned_rate_limits": { 00:17:31.793 "r_mbytes_per_sec": 0, 00:17:31.793 "rw_ios_per_sec": 0, 00:17:31.793 "rw_mbytes_per_sec": 0, 00:17:31.793 "w_mbytes_per_sec": 0 00:17:31.793 }, 00:17:31.793 "block_size": 512, 00:17:31.793 "claimed": false, 00:17:31.793 "driver_specific": { 00:17:31.793 "mp_policy": "active_passive", 00:17:31.793 "nvme": [ 00:17:31.793 { 00:17:31.793 "ctrlr_data": { 00:17:31.793 "ana_reporting": false, 00:17:31.793 "cntlid": 1, 00:17:31.793 "firmware_revision": "25.01", 00:17:31.794 "model_number": "SPDK bdev Controller", 00:17:31.794 "multi_ctrlr": true, 00:17:31.794 "oacs": { 00:17:31.794 "firmware": 0, 00:17:31.794 "format": 0, 00:17:31.794 "ns_manage": 0, 00:17:31.794 "security": 0 00:17:31.794 }, 00:17:31.794 "serial_number": "00000000000000000000", 00:17:31.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.794 "vendor_id": "0x8086" 00:17:31.794 }, 00:17:31.794 "ns_data": { 00:17:31.794 "can_share": true, 00:17:31.794 "id": 1 00:17:31.794 }, 00:17:31.794 "trid": { 00:17:31.794 "adrfam": "IPv4", 00:17:31.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.794 "traddr": "10.0.0.3", 00:17:31.794 "trsvcid": "4420", 00:17:31.794 "trtype": "TCP" 00:17:31.794 }, 00:17:31.794 "vs": { 00:17:31.794 "nvme_version": "1.3" 00:17:31.794 } 00:17:31.794 } 00:17:31.794 ] 00:17:31.794 }, 00:17:31.794 "memory_domains": [ 00:17:31.794 { 00:17:31.794 "dma_device_id": "system", 00:17:31.794 "dma_device_type": 1 00:17:31.794 } 00:17:31.794 ], 00:17:31.794 "name": "nvme0n1", 00:17:31.794 "num_blocks": 2097152, 00:17:31.794 "numa_id": -1, 00:17:31.794 "product_name": "NVMe disk", 00:17:31.794 "supported_io_types": { 00:17:31.794 "abort": true, 
00:17:31.794 "compare": true, 00:17:31.794 "compare_and_write": true, 00:17:31.794 "copy": true, 00:17:31.794 "flush": true, 00:17:31.794 "get_zone_info": false, 00:17:31.794 "nvme_admin": true, 00:17:31.794 "nvme_io": true, 00:17:31.794 "nvme_io_md": false, 00:17:31.794 "nvme_iov_md": false, 00:17:31.794 "read": true, 00:17:31.794 "reset": true, 00:17:31.794 "seek_data": false, 00:17:31.794 "seek_hole": false, 00:17:31.794 "unmap": false, 00:17:31.794 "write": true, 00:17:31.794 "write_zeroes": true, 00:17:31.794 "zcopy": false, 00:17:31.794 "zone_append": false, 00:17:31.794 "zone_management": false 00:17:31.794 }, 00:17:31.794 "uuid": "c8ec16b5-9fa7-4c3f-9ade-195a572918fe", 00:17:31.794 "zoned": false 00:17:31.794 } 00:17:31.794 ] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 [2024-12-16 11:37:37.016188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:31.794 [2024-12-16 11:37:37.016312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7d130 (9): Bad file descriptor 00:17:31.794 [2024-12-16 11:37:37.148255] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 [ 00:17:31.794 { 00:17:31.794 "aliases": [ 00:17:31.794 "c8ec16b5-9fa7-4c3f-9ade-195a572918fe" 00:17:31.794 ], 00:17:31.794 "assigned_rate_limits": { 00:17:31.794 "r_mbytes_per_sec": 0, 00:17:31.794 "rw_ios_per_sec": 0, 00:17:31.794 "rw_mbytes_per_sec": 0, 00:17:31.794 "w_mbytes_per_sec": 0 00:17:31.794 }, 00:17:31.794 "block_size": 512, 00:17:31.794 "claimed": false, 00:17:31.794 "driver_specific": { 00:17:31.794 "mp_policy": "active_passive", 00:17:31.794 "nvme": [ 00:17:31.794 { 00:17:31.794 "ctrlr_data": { 00:17:31.794 "ana_reporting": false, 00:17:31.794 "cntlid": 2, 00:17:31.794 "firmware_revision": "25.01", 00:17:31.794 "model_number": "SPDK bdev Controller", 00:17:31.794 "multi_ctrlr": true, 00:17:31.794 "oacs": { 00:17:31.794 "firmware": 0, 00:17:31.794 "format": 0, 00:17:31.794 "ns_manage": 0, 00:17:31.794 "security": 0 00:17:31.794 }, 00:17:31.794 "serial_number": "00000000000000000000", 00:17:31.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.794 "vendor_id": "0x8086" 00:17:31.794 }, 00:17:31.794 "ns_data": { 00:17:31.794 "can_share": true, 00:17:31.794 "id": 1 00:17:31.794 }, 00:17:31.794 "trid": { 00:17:31.794 "adrfam": "IPv4", 00:17:31.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.794 "traddr": "10.0.0.3", 00:17:31.794 "trsvcid": "4420", 00:17:31.794 "trtype": "TCP" 00:17:31.794 }, 00:17:31.794 "vs": { 00:17:31.794 "nvme_version": "1.3" 00:17:31.794 } 00:17:31.794 } 00:17:31.794 ] 
00:17:31.794 }, 00:17:31.794 "memory_domains": [ 00:17:31.794 { 00:17:31.794 "dma_device_id": "system", 00:17:31.794 "dma_device_type": 1 00:17:31.794 } 00:17:31.794 ], 00:17:31.794 "name": "nvme0n1", 00:17:31.794 "num_blocks": 2097152, 00:17:31.794 "numa_id": -1, 00:17:31.794 "product_name": "NVMe disk", 00:17:31.794 "supported_io_types": { 00:17:31.794 "abort": true, 00:17:31.794 "compare": true, 00:17:31.794 "compare_and_write": true, 00:17:31.794 "copy": true, 00:17:31.794 "flush": true, 00:17:31.794 "get_zone_info": false, 00:17:31.794 "nvme_admin": true, 00:17:31.794 "nvme_io": true, 00:17:31.794 "nvme_io_md": false, 00:17:31.794 "nvme_iov_md": false, 00:17:31.794 "read": true, 00:17:31.794 "reset": true, 00:17:31.794 "seek_data": false, 00:17:31.794 "seek_hole": false, 00:17:31.794 "unmap": false, 00:17:31.794 "write": true, 00:17:31.794 "write_zeroes": true, 00:17:31.794 "zcopy": false, 00:17:31.794 "zone_append": false, 00:17:31.794 "zone_management": false 00:17:31.794 }, 00:17:31.794 "uuid": "c8ec16b5-9fa7-4c3f-9ade-195a572918fe", 00:17:31.794 "zoned": false 00:17:31.794 } 00:17:31.794 ] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jGHjHKlUAE 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jGHjHKlUAE 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jGHjHKlUAE 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.794 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 [2024-12-16 11:37:37.224413] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.052 [2024-12-16 11:37:37.224595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 [2024-12-16 11:37:37.240440] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.052 nvme0n1 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.052 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.052 [ 00:17:32.052 { 00:17:32.052 "aliases": [ 00:17:32.052 "c8ec16b5-9fa7-4c3f-9ade-195a572918fe" 00:17:32.052 ], 00:17:32.052 "assigned_rate_limits": { 00:17:32.052 "r_mbytes_per_sec": 0, 00:17:32.052 "rw_ios_per_sec": 0, 00:17:32.052 "rw_mbytes_per_sec": 0, 00:17:32.052 "w_mbytes_per_sec": 0 00:17:32.052 }, 00:17:32.052 "block_size": 512, 00:17:32.052 "claimed": false, 00:17:32.052 "driver_specific": { 00:17:32.052 "mp_policy": "active_passive", 00:17:32.052 "nvme": [ 00:17:32.052 { 00:17:32.052 "ctrlr_data": { 00:17:32.052 "ana_reporting": false, 00:17:32.052 "cntlid": 3, 00:17:32.052 "firmware_revision": "25.01", 00:17:32.052 "model_number": "SPDK bdev Controller", 00:17:32.052 "multi_ctrlr": true, 00:17:32.052 "oacs": { 00:17:32.052 "firmware": 0, 00:17:32.052 "format": 0, 00:17:32.052 "ns_manage": 0, 00:17:32.052 "security": 0 00:17:32.052 }, 00:17:32.052 "serial_number": "00000000000000000000", 00:17:32.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.052 "vendor_id": "0x8086" 00:17:32.052 }, 00:17:32.052 "ns_data": { 00:17:32.052 "can_share": true, 00:17:32.052 "id": 1 00:17:32.052 }, 00:17:32.052 "trid": { 00:17:32.052 "adrfam": "IPv4", 00:17:32.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.053 "traddr": "10.0.0.3", 00:17:32.053 "trsvcid": "4421", 00:17:32.053 "trtype": "TCP" 00:17:32.053 }, 00:17:32.053 "vs": { 00:17:32.053 "nvme_version": "1.3" 00:17:32.053 } 00:17:32.053 } 00:17:32.053 ] 00:17:32.053 }, 00:17:32.053 "memory_domains": [ 00:17:32.053 { 00:17:32.053 "dma_device_id": "system", 00:17:32.053 "dma_device_type": 1 00:17:32.053 } 00:17:32.053 ], 00:17:32.053 "name": "nvme0n1", 00:17:32.053 "num_blocks": 
2097152, 00:17:32.053 "numa_id": -1, 00:17:32.053 "product_name": "NVMe disk", 00:17:32.053 "supported_io_types": { 00:17:32.053 "abort": true, 00:17:32.053 "compare": true, 00:17:32.053 "compare_and_write": true, 00:17:32.053 "copy": true, 00:17:32.053 "flush": true, 00:17:32.053 "get_zone_info": false, 00:17:32.053 "nvme_admin": true, 00:17:32.053 "nvme_io": true, 00:17:32.053 "nvme_io_md": false, 00:17:32.053 "nvme_iov_md": false, 00:17:32.053 "read": true, 00:17:32.053 "reset": true, 00:17:32.053 "seek_data": false, 00:17:32.053 "seek_hole": false, 00:17:32.053 "unmap": false, 00:17:32.053 "write": true, 00:17:32.053 "write_zeroes": true, 00:17:32.053 "zcopy": false, 00:17:32.053 "zone_append": false, 00:17:32.053 "zone_management": false 00:17:32.053 }, 00:17:32.053 "uuid": "c8ec16b5-9fa7-4c3f-9ade-195a572918fe", 00:17:32.053 "zoned": false 00:17:32.053 } 00:17:32.053 ] 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jGHjHKlUAE 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.053 rmmod nvme_tcp 00:17:32.053 rmmod nvme_fabrics 00:17:32.053 rmmod nvme_keyring 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 88658 ']' 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 88658 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 88658 ']' 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 88658 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:32.053 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88658 00:17:32.311 11:37:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.311 killing process with pid 88658 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88658' 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 88658 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 88658 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:32.311 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:17:32.569 ************************************ 00:17:32.569 END TEST nvmf_async_init 00:17:32.569 ************************************ 00:17:32.569 00:17:32.569 real 0m3.021s 00:17:32.569 user 0m2.663s 00:17:32.569 sys 0m0.776s 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.569 11:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.828 ************************************ 00:17:32.828 START TEST dma 00:17:32.828 ************************************ 00:17:32.828 11:37:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:32.828 * Looking for test storage... 00:17:32.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.828 --rc genhtml_branch_coverage=1 00:17:32.828 --rc genhtml_function_coverage=1 00:17:32.828 --rc genhtml_legend=1 00:17:32.828 --rc geninfo_all_blocks=1 00:17:32.828 --rc geninfo_unexecuted_blocks=1 00:17:32.828 00:17:32.828 ' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.828 --rc genhtml_branch_coverage=1 00:17:32.828 --rc genhtml_function_coverage=1 00:17:32.828 --rc genhtml_legend=1 00:17:32.828 --rc geninfo_all_blocks=1 00:17:32.828 --rc geninfo_unexecuted_blocks=1 00:17:32.828 00:17:32.828 ' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.828 --rc genhtml_branch_coverage=1 00:17:32.828 --rc genhtml_function_coverage=1 00:17:32.828 --rc genhtml_legend=1 00:17:32.828 --rc geninfo_all_blocks=1 00:17:32.828 --rc geninfo_unexecuted_blocks=1 00:17:32.828 00:17:32.828 ' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.828 --rc genhtml_branch_coverage=1 00:17:32.828 --rc genhtml_function_coverage=1 00:17:32.828 --rc genhtml_legend=1 00:17:32.828 --rc geninfo_all_blocks=1 00:17:32.828 --rc geninfo_unexecuted_blocks=1 00:17:32.828 00:17:32.828 ' 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.828 11:37:38 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.828 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:17:32.829 00:17:32.829 real 0m0.213s 00:17:32.829 user 0m0.132s 00:17:32.829 sys 0m0.096s 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.829 11:37:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:32.829 ************************************ 00:17:32.829 END TEST dma 00:17:32.829 ************************************ 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.088 ************************************ 00:17:33.088 START TEST nvmf_identify 00:17:33.088 ************************************ 00:17:33.088 11:37:38 
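identify.sh opens with nvmftestinit, so the block below replays the same veth/namespace setup that started this section. The burst of "Cannot find device ..." and "Cannot open network namespace ..." messages that follows is the expected first phase: nvmf_veth_init probes for leftovers from a previous run and deliberately swallows the failures (each failed delete in the trace is immediately followed by a `true`). A sketch of that probe-and-ignore idiom (an illustration of the pattern, not the harness's literal code):

    # tear down anything a previous run may have left behind; absent devices are fine
    ip link set nvmf_init_br nomaster 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true

Only after these probes does the script create the namespace, veth pairs, and bridge afresh, as traced below.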
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:33.088 * Looking for test storage... 00:17:33.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.088 --rc genhtml_branch_coverage=1 00:17:33.088 --rc genhtml_function_coverage=1 00:17:33.088 --rc genhtml_legend=1 00:17:33.088 --rc geninfo_all_blocks=1 00:17:33.088 --rc geninfo_unexecuted_blocks=1 00:17:33.088 00:17:33.088 ' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.088 --rc genhtml_branch_coverage=1 00:17:33.088 --rc genhtml_function_coverage=1 00:17:33.088 --rc genhtml_legend=1 00:17:33.088 --rc geninfo_all_blocks=1 00:17:33.088 --rc geninfo_unexecuted_blocks=1 00:17:33.088 00:17:33.088 ' 00:17:33.088 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.089 --rc genhtml_branch_coverage=1 00:17:33.089 --rc genhtml_function_coverage=1 00:17:33.089 --rc genhtml_legend=1 00:17:33.089 --rc geninfo_all_blocks=1 00:17:33.089 --rc geninfo_unexecuted_blocks=1 00:17:33.089 00:17:33.089 ' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.089 --rc genhtml_branch_coverage=1 00:17:33.089 --rc genhtml_function_coverage=1 00:17:33.089 --rc genhtml_legend=1 00:17:33.089 --rc geninfo_all_blocks=1 00:17:33.089 --rc geninfo_unexecuted_blocks=1 00:17:33.089 00:17:33.089 ' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.089 
11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.089 11:37:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:33.089 Cannot find device "nvmf_init_br" 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:33.089 Cannot find device "nvmf_init_br2" 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:33.089 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:33.348 Cannot find device "nvmf_tgt_br" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:33.348 Cannot find device "nvmf_tgt_br2" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:33.348 Cannot find device "nvmf_init_br" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:33.348 Cannot find device "nvmf_init_br2" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:33.348 Cannot find device "nvmf_tgt_br" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:33.348 Cannot find device "nvmf_tgt_br2" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:33.348 Cannot find device "nvmf_br" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:33.348 Cannot find device "nvmf_init_if" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:33.348 Cannot find device "nvmf_init_if2" 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:33.348 
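The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init begins by tearing down whatever a previous run left behind, and each cleanup command is paired with `true` (the bare `# true` xtrace lines) so a missing interface cannot abort the script under `set -e`. A condensed sketch of that pattern, using the device names from the trace:

  # Idempotent teardown: every command may fail on a clean host.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true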
11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:33.348 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:33.606 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:33.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:33.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:33.607 00:17:33.607 --- 10.0.0.3 ping statistics --- 00:17:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.607 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:33.607 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:33.607 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:33.607 00:17:33.607 --- 10.0.0.4 ping statistics --- 00:17:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.607 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:33.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:33.607 00:17:33.607 --- 10.0.0.1 ping statistics --- 00:17:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.607 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:33.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:33.607 00:17:33.607 --- 10.0.0.2 ping statistics --- 00:17:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.607 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88986 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88986 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 88986 ']' 00:17:33.607 
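By the `return 0` above, nvmf_veth_init has finished: the target-side interfaces (10.0.0.3/.4) live inside the nvmf_tgt_ns_spdk namespace, the initiator-side interfaces (10.0.0.1/.2) stay in the root namespace, and the four bridge-side veth ends are enslaved to nvmf_br, which is why all four pings succeed. A minimal standalone rebuild of that topology, condensed from the commands in the trace, with the firewall rules tagged the way the `ipts` wrapper does:

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # One veth pair per endpoint; the *_br ends stay in the root
  # namespace and are enslaved to the bridge below.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # ipts tags each rule with its own text so teardown can later find
  # this run's rules via `iptables-save | grep SPDK_NVMF` and mirror
  # them with -D.
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT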
11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.607 11:37:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.607 [2024-12-16 11:37:38.952190] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:33.607 [2024-12-16 11:37:38.952313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.865 [2024-12-16 11:37:39.107135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.865 [2024-12-16 11:37:39.165712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.865 [2024-12-16 11:37:39.165768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.865 [2024-12-16 11:37:39.165783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.865 [2024-12-16 11:37:39.165793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.865 [2024-12-16 11:37:39.165802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
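waitforlisten is the autotest helper behind the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message: it polls until the freshly forked nvmf_tgt (PID 88986 here) is both still alive and answering RPCs. A simplified equivalent of that loop (the real helper adds a retry budget and richer error handling):

  pid=88986                       # $! captured when nvmf_tgt was forked
  rpc_sock=/var/tmp/spdk.sock
  while ! scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
      # Bail out early if the target crashed rather than being slow.
      kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done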
00:17:33.865 [2024-12-16 11:37:39.167076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.865 [2024-12-16 11:37:39.167169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.865 [2024-12-16 11:37:39.167302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.865 [2024-12-16 11:37:39.167309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 [2024-12-16 11:37:39.310615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 Malloc0 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 [2024-12-16 11:37:39.418133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.124 [ 00:17:34.124 { 00:17:34.124 "allow_any_host": true, 00:17:34.124 "hosts": [], 00:17:34.124 "listen_addresses": [ 00:17:34.124 { 00:17:34.124 "adrfam": "IPv4", 00:17:34.124 "traddr": "10.0.0.3", 00:17:34.124 "trsvcid": "4420", 00:17:34.124 "trtype": "TCP" 00:17:34.124 } 00:17:34.124 ], 00:17:34.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:34.124 "subtype": "Discovery" 00:17:34.124 }, 00:17:34.124 { 00:17:34.124 "allow_any_host": true, 00:17:34.124 "hosts": [], 00:17:34.124 "listen_addresses": [ 00:17:34.124 { 00:17:34.124 "adrfam": "IPv4", 00:17:34.124 "traddr": "10.0.0.3", 00:17:34.124 "trsvcid": "4420", 00:17:34.124 "trtype": "TCP" 00:17:34.124 } 00:17:34.124 ], 00:17:34.124 "max_cntlid": 65519, 00:17:34.124 "max_namespaces": 32, 00:17:34.124 "min_cntlid": 1, 00:17:34.124 "model_number": "SPDK bdev Controller", 00:17:34.124 "namespaces": [ 00:17:34.124 { 00:17:34.124 "bdev_name": "Malloc0", 00:17:34.124 "eui64": "ABCDEF0123456789", 00:17:34.124 "name": "Malloc0", 00:17:34.124 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:34.124 "nsid": 1, 00:17:34.124 "uuid": "3fb85ba5-d902-48cc-81ae-08ec33ff2cad" 00:17:34.124 } 00:17:34.124 ], 00:17:34.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.124 "serial_number": "SPDK00000000000001", 00:17:34.124 "subtype": "NVMe" 00:17:34.124 } 00:17:34.124 ] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.124 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:34.124 [2024-12-16 11:37:39.477406] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
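rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking JSON-RPC to the target's /var/tmp/spdk.sock, so the whole bring-up above, from the transport to the two listeners, condenses to six RPCs (arguments exactly as logged; `bdev_malloc_create 64 512` means a 64 MiB bdev with 512-byte blocks):

  rpc="scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems        # prints the JSON dump shown above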
00:17:34.125 [2024-12-16 11:37:39.477485] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89030 ] 00:17:34.387 [2024-12-16 11:37:39.645846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:34.387 [2024-12-16 11:37:39.645914] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:34.387 [2024-12-16 11:37:39.645922] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:34.387 [2024-12-16 11:37:39.645934] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:34.387 [2024-12-16 11:37:39.645945] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:34.387 [2024-12-16 11:37:39.649332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:34.387 [2024-12-16 11:37:39.649400] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e51d90 0 00:17:34.387 [2024-12-16 11:37:39.658032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:34.387 [2024-12-16 11:37:39.658055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:34.387 [2024-12-16 11:37:39.658061] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:34.387 [2024-12-16 11:37:39.658064] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:34.387 [2024-12-16 11:37:39.658102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.658112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.658116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.658130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:34.387 [2024-12-16 11:37:39.658164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666082] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:34.387 [2024-12-16 11:37:39.666092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:34.387 [2024-12-16 11:37:39.666100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:34.387 [2024-12-16 11:37:39.666121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
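The DEBUG lines that follow are spdk_nvme_identify's host-side controller-initialization state machine: TCP connect, ICReq/ICResp exchange, FABRIC CONNECT on the admin queue, then VS and CAP property reads. A kernel initiator drives the same NVMe/TCP exchange; the equivalent discovery from the root namespace would be (nvme-cli assumed installed):

  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.3 -s 4420
  # Expect the same two discovery log entries the identify tool prints
  # below: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1.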
00:17:34.387 [2024-12-16 11:37:39.666131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.387 [2024-12-16 11:37:39.666168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:34.387 [2024-12-16 11:37:39.666280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:34.387 [2024-12-16 11:37:39.666288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.387 [2024-12-16 11:37:39.666325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:34.387 [2024-12-16 11:37:39.666413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.387 [2024-12-16 11:37:39.666455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666515] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.387 [2024-12-16 11:37:39.666587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666666] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:34.387 [2024-12-16 11:37:39.666671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666791] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:34.387 [2024-12-16 11:37:39.666797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.387 [2024-12-16 11:37:39.666842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.387 [2024-12-16 11:37:39.666905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.387 [2024-12-16 11:37:39.666912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.387 [2024-12-16 11:37:39.666916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:34.387 [2024-12-16 11:37:39.666920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.387 [2024-12-16 11:37:39.666926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:34.387 [2024-12-16 11:37:39.666936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.387 [2024-12-16 11:37:39.666945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.387 [2024-12-16 11:37:39.666953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.388 [2024-12-16 11:37:39.666971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.388 [2024-12-16 11:37:39.667039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.388 [2024-12-16 11:37:39.667048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.388 [2024-12-16 11:37:39.667052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.388 [2024-12-16 11:37:39.667061] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:34.388 [2024-12-16 11:37:39.667067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:34.388 [2024-12-16 11:37:39.667087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.388 [2024-12-16 11:37:39.667133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.388 [2024-12-16 11:37:39.667236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.388 [2024-12-16 11:37:39.667243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.388 [2024-12-16 11:37:39.667248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667252] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e51d90): datao=0, datal=4096, cccid=0 00:17:34.388 [2024-12-16 11:37:39.667257] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92600) on tqpair(0x1e51d90): expected_datao=0, payload_size=4096 00:17:34.388 [2024-12-16 11:37:39.667262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667276] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.388 [2024-12-16 11:37:39.667291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.388 [2024-12-16 11:37:39.667295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.388 [2024-12-16 11:37:39.667308] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:34.388 [2024-12-16 11:37:39.667314] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:34.388 [2024-12-16 11:37:39.667319] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:34.388 [2024-12-16 11:37:39.667325] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:34.388 [2024-12-16 11:37:39.667330] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:34.388 [2024-12-16 11:37:39.667335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.388 [2024-12-16 11:37:39.667390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.388 [2024-12-16 11:37:39.667458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.388 [2024-12-16 11:37:39.667465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.388 [2024-12-16 11:37:39.667469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.388 [2024-12-16 11:37:39.667481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.388 
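Of the identify-done values above, 'transport max_xfer_size 4294967295' is the TCP transport's effectively unlimited cap, while 'MDTS max_xfer_size 131072' comes from the controller: per the NVMe spec the limit is 2^MDTS pages of CAP.MPSMIN size. With the 4096-byte minimum page reported later in the identify output, that implies MDTS = 5 (inferred here; the log prints only the product):

  # 4096 * 2^5 = 131072, matching "Max Data Transfer Size: 131072" below
  echo $(( 4096 * (1 << 5) ))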
[2024-12-16 11:37:39.667503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.388 [2024-12-16 11:37:39.667524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.388 [2024-12-16 11:37:39.667544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.388 [2024-12-16 11:37:39.667564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:34.388 [2024-12-16 11:37:39.667586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.388 [2024-12-16 11:37:39.667619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92600, cid 0, qid 0 00:17:34.388 [2024-12-16 11:37:39.667627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92780, cid 1, qid 0 00:17:34.388 [2024-12-16 11:37:39.667632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92900, cid 2, qid 0 00:17:34.388 [2024-12-16 11:37:39.667637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.388 [2024-12-16 11:37:39.667642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92c00, cid 4, qid 0 00:17:34.388 [2024-12-16 11:37:39.667735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.388 [2024-12-16 11:37:39.667742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.388 [2024-12-16 11:37:39.667745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92c00) on tqpair=0x1e51d90 00:17:34.388 [2024-12-16 
11:37:39.667755] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:34.388 [2024-12-16 11:37:39.667761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:34.388 [2024-12-16 11:37:39.667773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.667786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.388 [2024-12-16 11:37:39.667806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92c00, cid 4, qid 0 00:17:34.388 [2024-12-16 11:37:39.667866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.388 [2024-12-16 11:37:39.667874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.388 [2024-12-16 11:37:39.667877] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667881] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e51d90): datao=0, datal=4096, cccid=4 00:17:34.388 [2024-12-16 11:37:39.667886] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92c00) on tqpair(0x1e51d90): expected_datao=0, payload_size=4096 00:17:34.388 [2024-12-16 11:37:39.667891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667902] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.388 [2024-12-16 11:37:39.667917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.388 [2024-12-16 11:37:39.667921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.667925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92c00) on tqpair=0x1e51d90 00:17:34.388 [2024-12-16 11:37:39.667938] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:34.388 [2024-12-16 11:37:39.667992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.668003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.668025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.388 [2024-12-16 11:37:39.668036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.668041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.388 [2024-12-16 11:37:39.668045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e51d90) 00:17:34.388 [2024-12-16 11:37:39.668052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.388 [2024-12-16 11:37:39.668085] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92c00, cid 4, qid 0 00:17:34.388 [2024-12-16 11:37:39.668093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92d80, cid 5, qid 0 00:17:34.388 [2024-12-16 11:37:39.668237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.388 [2024-12-16 11:37:39.668245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.388 [2024-12-16 11:37:39.668249] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.668253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e51d90): datao=0, datal=1024, cccid=4 00:17:34.389 [2024-12-16 11:37:39.668258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92c00) on tqpair(0x1e51d90): expected_datao=0, payload_size=1024 00:17:34.389 [2024-12-16 11:37:39.668263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.668269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.668274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.668280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.389 [2024-12-16 11:37:39.668286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.389 [2024-12-16 11:37:39.668290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.668294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92d80) on tqpair=0x1e51d90 00:17:34.389 [2024-12-16 11:37:39.709116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.389 [2024-12-16 11:37:39.709140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.389 [2024-12-16 11:37:39.709145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92c00) on tqpair=0x1e51d90 00:17:34.389 [2024-12-16 11:37:39.709165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e51d90) 00:17:34.389 [2024-12-16 11:37:39.709180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.389 [2024-12-16 11:37:39.709212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92c00, cid 4, qid 0 00:17:34.389 [2024-12-16 11:37:39.709289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.389 [2024-12-16 11:37:39.709296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.389 [2024-12-16 11:37:39.709300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709304] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e51d90): datao=0, datal=3072, cccid=4 00:17:34.389 [2024-12-16 11:37:39.709309] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92c00) on tqpair(0x1e51d90): expected_datao=0, payload_size=3072 00:17:34.389 [2024-12-16 11:37:39.709313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709337] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
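The cdw10 values on the GET LOG PAGE commands encode the discovery log (ID 0x70) plus a 0-based dword count in bits 31:16, and they line up with the datal sizes in the c2h_data lines: 0x00ff0070 asks for 256 dwords (the 1024-byte header probe), 0x02ff0070 for 768 dwords (3072 bytes, header plus both records), and the 0x00010070 just below for 2 dwords (an 8-byte re-read of the header, apparently to re-check its generation counter). A quick decode:

  for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
      lid=$(( cdw10 & 0xff ))
      numd=$(( ((cdw10 >> 16) & 0xffff) + 1 ))   # NUMDL is 0-based
      printf 'cdw10=%s  LID=0x%02x  %d dwords = %d bytes\n' \
          "$cdw10" "$lid" "$numd" $(( numd * 4 ))
  done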
00:17:34.389 [2024-12-16 11:37:39.709341] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.389 [2024-12-16 11:37:39.709371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.389 [2024-12-16 11:37:39.709375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92c00) on tqpair=0x1e51d90 00:17:34.389 [2024-12-16 11:37:39.709390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e51d90) 00:17:34.389 [2024-12-16 11:37:39.709404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.389 [2024-12-16 11:37:39.709431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92c00, cid 4, qid 0 00:17:34.389 [2024-12-16 11:37:39.709501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.389 [2024-12-16 11:37:39.709508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.389 [2024-12-16 11:37:39.709512] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709516] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e51d90): datao=0, datal=8, cccid=4 00:17:34.389 [2024-12-16 11:37:39.709521] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92c00) on tqpair(0x1e51d90): expected_datao=0, payload_size=8 00:17:34.389 [2024-12-16 11:37:39.709525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709532] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.709536] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.389 ===================================================== 00:17:34.389 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:34.389 ===================================================== 00:17:34.389 Controller Capabilities/Features 00:17:34.389 ================================ 00:17:34.389 Vendor ID: 0000 00:17:34.389 Subsystem Vendor ID: 0000 00:17:34.389 Serial Number: .................... 00:17:34.389 Model Number: ........................................ 
00:17:34.389 Firmware Version: 25.01 00:17:34.389 Recommended Arb Burst: 0 00:17:34.389 IEEE OUI Identifier: 00 00 00 00:17:34.389 Multi-path I/O 00:17:34.389 May have multiple subsystem ports: No 00:17:34.389 May have multiple controllers: No 00:17:34.389 Associated with SR-IOV VF: No 00:17:34.389 Max Data Transfer Size: 131072 00:17:34.389 Max Number of Namespaces: 0 00:17:34.389 Max Number of I/O Queues: 1024 00:17:34.389 NVMe Specification Version (VS): 1.3 00:17:34.389 NVMe Specification Version (Identify): 1.3 00:17:34.389 Maximum Queue Entries: 128 00:17:34.389 Contiguous Queues Required: Yes 00:17:34.389 Arbitration Mechanisms Supported 00:17:34.389 Weighted Round Robin: Not Supported 00:17:34.389 Vendor Specific: Not Supported 00:17:34.389 Reset Timeout: 15000 ms 00:17:34.389 Doorbell Stride: 4 bytes 00:17:34.389 NVM Subsystem Reset: Not Supported 00:17:34.389 Command Sets Supported 00:17:34.389 NVM Command Set: Supported 00:17:34.389 Boot Partition: Not Supported 00:17:34.389 Memory Page Size Minimum: 4096 bytes 00:17:34.389 Memory Page Size Maximum: 4096 bytes 00:17:34.389 Persistent Memory Region: Not Supported 00:17:34.389 Optional Asynchronous Events Supported 00:17:34.389 Namespace Attribute Notices: Not Supported 00:17:34.389 Firmware Activation Notices: Not Supported 00:17:34.389 ANA Change Notices: Not Supported 00:17:34.389 PLE Aggregate Log Change Notices: Not Supported 00:17:34.389 LBA Status Info Alert Notices: Not Supported 00:17:34.389 EGE Aggregate Log Change Notices: Not Supported 00:17:34.389 Normal NVM Subsystem Shutdown event: Not Supported 00:17:34.389 Zone Descriptor Change Notices: Not Supported 00:17:34.389 Discovery Log Change Notices: Supported 00:17:34.389 Controller Attributes 00:17:34.389 128-bit Host Identifier: Not Supported 00:17:34.389 Non-Operational Permissive Mode: Not Supported 00:17:34.389 NVM Sets: Not Supported 00:17:34.389 Read Recovery Levels: Not Supported 00:17:34.389 Endurance Groups: Not Supported 00:17:34.389 Predictable Latency Mode: Not Supported 00:17:34.389 Traffic Based Keep ALive: Not Supported 00:17:34.389 Namespace Granularity: Not Supported 00:17:34.389 SQ Associations: Not Supported 00:17:34.389 UUID List: Not Supported 00:17:34.389 Multi-Domain Subsystem: Not Supported 00:17:34.389 Fixed Capacity Management: Not Supported 00:17:34.389 Variable Capacity Management: Not Supported 00:17:34.389 Delete Endurance Group: Not Supported 00:17:34.389 Delete NVM Set: Not Supported 00:17:34.389 Extended LBA Formats Supported: Not Supported 00:17:34.389 Flexible Data Placement Supported: Not Supported 00:17:34.389 00:17:34.389 Controller Memory Buffer Support 00:17:34.389 ================================ 00:17:34.389 Supported: No 00:17:34.389 00:17:34.389 Persistent Memory Region Support 00:17:34.389 ================================ 00:17:34.389 Supported: No 00:17:34.389 00:17:34.389 Admin Command Set Attributes 00:17:34.389 ============================ 00:17:34.389 Security Send/Receive: Not Supported 00:17:34.389 Format NVM: Not Supported 00:17:34.389 Firmware Activate/Download: Not Supported 00:17:34.389 Namespace Management: Not Supported 00:17:34.389 Device Self-Test: Not Supported 00:17:34.389 Directives: Not Supported 00:17:34.389 NVMe-MI: Not Supported 00:17:34.389 Virtualization Management: Not Supported 00:17:34.389 Doorbell Buffer Config: Not Supported 00:17:34.389 Get LBA Status Capability: Not Supported 00:17:34.389 Command & Feature Lockdown Capability: Not Supported 00:17:34.389 Abort Command Limit: 1 00:17:34.389 Async 
Event Request Limit: 4 00:17:34.389 Number of Firmware Slots: N/A 00:17:34.389 Firmware Slot 1 Read-Only: N/A 00:17:34.389 [2024-12-16 11:37:39.752097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.389 [2024-12-16 11:37:39.752119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.389 [2024-12-16 11:37:39.752140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.389 [2024-12-16 11:37:39.752145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92c00) on tqpair=0x1e51d90 00:17:34.389 Firmware Activation Without Reset: N/A 00:17:34.389 Multiple Update Detection Support: N/A 00:17:34.389 Firmware Update Granularity: No Information Provided 00:17:34.389 Per-Namespace SMART Log: No 00:17:34.389 Asymmetric Namespace Access Log Page: Not Supported 00:17:34.389 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:34.389 Command Effects Log Page: Not Supported 00:17:34.389 Get Log Page Extended Data: Supported 00:17:34.389 Telemetry Log Pages: Not Supported 00:17:34.389 Persistent Event Log Pages: Not Supported 00:17:34.389 Supported Log Pages Log Page: May Support 00:17:34.389 Commands Supported & Effects Log Page: Not Supported 00:17:34.389 Feature Identifiers & Effects Log Page:May Support 00:17:34.389 NVMe-MI Commands & Effects Log Page: May Support 00:17:34.389 Data Area 4 for Telemetry Log: Not Supported 00:17:34.389 Error Log Page Entries Supported: 128 00:17:34.389 Keep Alive: Not Supported 00:17:34.389 00:17:34.389 NVM Command Set Attributes 00:17:34.389 ========================== 00:17:34.389 Submission Queue Entry Size 00:17:34.389 Max: 1 00:17:34.389 Min: 1 00:17:34.389 Completion Queue Entry Size 00:17:34.389 Max: 1 00:17:34.389 Min: 1 00:17:34.389 Number of Namespaces: 0 00:17:34.389 Compare Command: Not Supported 00:17:34.389 Write Uncorrectable Command: Not Supported 00:17:34.390 Dataset Management Command: Not Supported 00:17:34.390 Write Zeroes Command: Not Supported 00:17:34.390 Set Features Save Field: Not Supported 00:17:34.390 Reservations: Not Supported 00:17:34.390 Timestamp: Not Supported 00:17:34.390 Copy: Not Supported 00:17:34.390 Volatile Write Cache: Not Present 00:17:34.390 Atomic Write Unit (Normal): 1 00:17:34.390 Atomic Write Unit (PFail): 1 00:17:34.390 Atomic Compare & Write Unit: 1 00:17:34.390 Fused Compare & Write: Supported 00:17:34.390 Scatter-Gather List 00:17:34.390 SGL Command Set: Supported 00:17:34.390 SGL Keyed: Supported 00:17:34.390 SGL Bit Bucket Descriptor: Not Supported 00:17:34.390 SGL Metadata Pointer: Not Supported 00:17:34.390 Oversized SGL: Not Supported 00:17:34.390 SGL Metadata Address: Not Supported 00:17:34.390 SGL Offset: Supported 00:17:34.390 Transport SGL Data Block: Not Supported 00:17:34.390 Replay Protected Memory Block: Not Supported 00:17:34.390 00:17:34.390 Firmware Slot Information 00:17:34.390 ========================= 00:17:34.390 Active slot: 0 00:17:34.390 00:17:34.390 00:17:34.390 Error Log 00:17:34.390 ========= 00:17:34.390 00:17:34.390 Active Namespaces 00:17:34.390 ================= 00:17:34.390 Discovery Log Page 00:17:34.390 ================== 00:17:34.390 Generation Counter: 2 00:17:34.390 Number of Records: 2 00:17:34.390 Record Format: 0 00:17:34.390 00:17:34.390 Discovery Log Entry 0 00:17:34.390 ---------------------- 00:17:34.390 Transport Type: 3 (TCP) 00:17:34.390 Address Family: 1 (IPv4) 00:17:34.390 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:34.390 Entry Flags: 00:17:34.390 Duplicate Returned
Information: 1 00:17:34.390 Explicit Persistent Connection Support for Discovery: 1 00:17:34.390 Transport Requirements: 00:17:34.390 Secure Channel: Not Required 00:17:34.390 Port ID: 0 (0x0000) 00:17:34.390 Controller ID: 65535 (0xffff) 00:17:34.390 Admin Max SQ Size: 128 00:17:34.390 Transport Service Identifier: 4420 00:17:34.390 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:34.390 Transport Address: 10.0.0.3 00:17:34.390 Discovery Log Entry 1 00:17:34.390 ---------------------- 00:17:34.390 Transport Type: 3 (TCP) 00:17:34.390 Address Family: 1 (IPv4) 00:17:34.390 Subsystem Type: 2 (NVM Subsystem) 00:17:34.390 Entry Flags: 00:17:34.390 Duplicate Returned Information: 0 00:17:34.390 Explicit Persistent Connection Support for Discovery: 0 00:17:34.390 Transport Requirements: 00:17:34.390 Secure Channel: Not Required 00:17:34.390 Port ID: 0 (0x0000) 00:17:34.390 Controller ID: 65535 (0xffff) 00:17:34.390 Admin Max SQ Size: 128 00:17:34.390 Transport Service Identifier: 4420 00:17:34.390 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:34.390 Transport Address: 10.0.0.3 [2024-12-16 11:37:39.752250] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:34.390 [2024-12-16 11:37:39.752266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92600) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.390 [2024-12-16 11:37:39.752279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92780) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.390 [2024-12-16 11:37:39.752290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92900) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.390 [2024-12-16 11:37:39.752299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.390 [2024-12-16 11:37:39.752317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.752435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.752442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 11:37:39.752445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752450] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.752570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.752577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 11:37:39.752581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752591] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:34.390 [2024-12-16 11:37:39.752596] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:34.390 [2024-12-16 11:37:39.752606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.752698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.752707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 11:37:39.752711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.752816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.752823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 
11:37:39.752827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.752930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.752937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 11:37:39.752941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.752955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.752964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.390 [2024-12-16 11:37:39.752972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.390 [2024-12-16 11:37:39.752990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.390 [2024-12-16 11:37:39.753040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.390 [2024-12-16 11:37:39.753049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.390 [2024-12-16 11:37:39.753053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.753058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.390 [2024-12-16 11:37:39.753069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.753074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.390 [2024-12-16 11:37:39.753078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on 
tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753530] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 
[2024-12-16 11:37:39.753873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.753891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.753947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.753956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.753959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.753975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.753984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.753991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.754020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.754076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.754083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.754087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.754102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.754119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.754139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.391 [2024-12-16 11:37:39.754190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.391 [2024-12-16 11:37:39.754197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.391 [2024-12-16 11:37:39.754201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.391 [2024-12-16 11:37:39.754215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.391 [2024-12-16 11:37:39.754224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.391 [2024-12-16 11:37:39.754232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.391 [2024-12-16 11:37:39.754249] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.754307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.754418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.754540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 
[2024-12-16 11:37:39.754660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.754777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.754888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.754895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.754898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.754913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.754922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.754930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.754948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:34.392 [2024-12-16 11:37:39.755028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755384] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.392 [2024-12-16 11:37:39.755612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.392 [2024-12-16 11:37:39.755621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.392 [2024-12-16 11:37:39.755629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.392 [2024-12-16 11:37:39.755647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.392 [2024-12-16 11:37:39.755700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.392 [2024-12-16 11:37:39.755707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.392 [2024-12-16 11:37:39.755711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.393 [2024-12-16 11:37:39.755726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755735] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.393 [2024-12-16 11:37:39.755742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.393 [2024-12-16 11:37:39.755760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.393 [2024-12-16 11:37:39.755814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.393 [2024-12-16 11:37:39.755821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.393 [2024-12-16 11:37:39.755825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.393 [2024-12-16 11:37:39.755839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.393 [2024-12-16 11:37:39.755856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.393 [2024-12-16 11:37:39.755874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.393 [2024-12-16 11:37:39.755926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.393 [2024-12-16 11:37:39.755933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.393 [2024-12-16 11:37:39.755937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.393 [2024-12-16 11:37:39.755952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.755961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.393 [2024-12-16 11:37:39.755968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.393 [2024-12-16 11:37:39.755986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.393 [2024-12-16 11:37:39.759054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.393 [2024-12-16 11:37:39.759075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.393 [2024-12-16 11:37:39.759095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.759100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.393 [2024-12-16 11:37:39.759115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.759120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.759124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e51d90) 00:17:34.393 [2024-12-16 11:37:39.759133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.393 [2024-12-16 11:37:39.759160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a80, cid 3, qid 0 00:17:34.393 [2024-12-16 11:37:39.759214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.393 [2024-12-16 11:37:39.759222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.393 [2024-12-16 11:37:39.759225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.393 [2024-12-16 11:37:39.759230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a80) on tqpair=0x1e51d90 00:17:34.393 [2024-12-16 11:37:39.759238] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:17:34.393 00:17:34.393 11:37:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:34.655 [2024-12-16 11:37:39.797497] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:34.655 [2024-12-16 11:37:39.797557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89033 ] 00:17:34.655 [2024-12-16 11:37:39.955118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:34.655 [2024-12-16 11:37:39.955193] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:34.655 [2024-12-16 11:37:39.955200] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:34.655 [2024-12-16 11:37:39.955210] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:34.655 [2024-12-16 11:37:39.955218] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:34.655 [2024-12-16 11:37:39.955476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:34.655 [2024-12-16 11:37:39.955534] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10bed90 0 00:17:34.655 [2024-12-16 11:37:39.961050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:34.655 [2024-12-16 11:37:39.961076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:34.655 [2024-12-16 11:37:39.961098] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:34.655 [2024-12-16 11:37:39.961102] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:34.655 [2024-12-16 11:37:39.961142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.655 [2024-12-16 11:37:39.961150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.655 [2024-12-16 11:37:39.961154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.961166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:34.656 [2024-12-16 11:37:39.961199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.972259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.972281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.972287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.972307] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:34.656 [2024-12-16 11:37:39.972316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:34.656 [2024-12-16 11:37:39.972324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:34.656 [2024-12-16 11:37:39.972345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.972364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.972396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.972476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.972483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.972487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.972503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:34.656 [2024-12-16 11:37:39.972512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:34.656 [2024-12-16 11:37:39.972520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.972537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.972559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.972617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.972624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.972628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.972639] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:34.656 [2024-12-16 11:37:39.972657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.972665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.972682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.972702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.972761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.972770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.972774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.972785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.972801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.972818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.972837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.972892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.972899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.972903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.972907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.972912] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:34.656 [2024-12-16 11:37:39.972918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.972926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.973037] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:34.656 [2024-12-16 11:37:39.973045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.973055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.973071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.973095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.973153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.973160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.973164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.973174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:34.656 [2024-12-16 11:37:39.973185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.973201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.973220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.973271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.973278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.973282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.973291] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:34.656 [2024-12-16 11:37:39.973297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:34.656 [2024-12-16 11:37:39.973305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:34.656 [2024-12-16 11:37:39.973316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:34.656 [2024-12-16 11:37:39.973327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.656 [2024-12-16 11:37:39.973339] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.656 [2024-12-16 11:37:39.973359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.656 [2024-12-16 11:37:39.973457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.656 [2024-12-16 11:37:39.973464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.656 [2024-12-16 11:37:39.973468] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973472] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=4096, cccid=0 00:17:34.656 [2024-12-16 11:37:39.973477] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ff600) on tqpair(0x10bed90): expected_datao=0, payload_size=4096 00:17:34.656 [2024-12-16 11:37:39.973482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973490] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973494] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.656 [2024-12-16 11:37:39.973509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.656 [2024-12-16 11:37:39.973513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.656 [2024-12-16 11:37:39.973517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.656 [2024-12-16 11:37:39.973526] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:34.656 [2024-12-16 11:37:39.973531] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:34.656 [2024-12-16 11:37:39.973536] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:34.656 [2024-12-16 11:37:39.973541] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:34.656 [2024-12-16 11:37:39.973546] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:34.657 [2024-12-16 11:37:39.973551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-12-16 11:37:39.973605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.657 [2024-12-16 11:37:39.973663] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.657 [2024-12-16 11:37:39.973670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.657 [2024-12-16 11:37:39.973674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.657 [2024-12-16 11:37:39.973685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.657 [2024-12-16 11:37:39.973707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.657 [2024-12-16 11:37:39.973728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.657 [2024-12-16 11:37:39.973748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.657 [2024-12-16 11:37:39.973767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.973801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.657 [2024-12-16 11:37:39.973822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff600, cid 0, qid 0 00:17:34.657 [2024-12-16 11:37:39.973830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x10ff780, cid 1, qid 0 00:17:34.657 [2024-12-16 11:37:39.973835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ff900, cid 2, qid 0 00:17:34.657 [2024-12-16 11:37:39.973840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.657 [2024-12-16 11:37:39.973844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.657 [2024-12-16 11:37:39.973936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.657 [2024-12-16 11:37:39.973943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.657 [2024-12-16 11:37:39.973947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.657 [2024-12-16 11:37:39.973957] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:34.657 [2024-12-16 11:37:39.973963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.973990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.973998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.974019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-12-16 11:37:39.974044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.657 [2024-12-16 11:37:39.974100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.657 [2024-12-16 11:37:39.974107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.657 [2024-12-16 11:37:39.974111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.657 [2024-12-16 11:37:39.974178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.974211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.657 [2024-12-16 11:37:39.974232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.657 [2024-12-16 11:37:39.974299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.657 [2024-12-16 11:37:39.974306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.657 [2024-12-16 11:37:39.974310] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974314] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=4096, cccid=4 00:17:34.657 [2024-12-16 11:37:39.974319] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ffc00) on tqpair(0x10bed90): expected_datao=0, payload_size=4096 00:17:34.657 [2024-12-16 11:37:39.974324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974335] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.657 [2024-12-16 11:37:39.974350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.657 [2024-12-16 11:37:39.974354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.657 [2024-12-16 11:37:39.974378] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:34.657 [2024-12-16 11:37:39.974389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.657 [2024-12-16 11:37:39.974421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.657 [2024-12-16 11:37:39.974442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.657 [2024-12-16 11:37:39.974545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.657 [2024-12-16 11:37:39.974558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.657 [2024-12-16 11:37:39.974563] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974567] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=4096, cccid=4 00:17:34.657 [2024-12-16 11:37:39.974572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ffc00) on tqpair(0x10bed90): expected_datao=0, payload_size=4096 00:17:34.657 [2024-12-16 11:37:39.974577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974584] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.657 
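The records above trace the host completing admin-queue bring-up over NVMe/TCP (keep-alive, Set Features number of queues) and pulling IDENTIFY data back in C2H data PDUs. A minimal sketch of how a host-side identify run like this one is typically launched against the target this log later reports at 10.0.0.3:4420, assuming an SPDK build tree (the example-binary path is an assumption; the transport string mirrors values visible in this log):

  # Hypothetical invocation, not part of the captured output: connect to the
  # TCP target and print the controller/namespace report seen further below.
  ./build/examples/identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'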
[2024-12-16 11:37:39.974588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.657 [2024-12-16 11:37:39.974603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.657 [2024-12-16 11:37:39.974607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.657 [2024-12-16 11:37:39.974628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:34.657 [2024-12-16 11:37:39.974649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.657 [2024-12-16 11:37:39.974654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.974662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.974685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.658 [2024-12-16 11:37:39.974755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.658 [2024-12-16 11:37:39.974762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.658 [2024-12-16 11:37:39.974766] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=4096, cccid=4 00:17:34.658 [2024-12-16 11:37:39.974775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ffc00) on tqpair(0x10bed90): expected_datao=0, payload_size=4096 00:17:34.658 [2024-12-16 11:37:39.974779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974787] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.974805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.974809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.974823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974855] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974874] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:34.658 [2024-12-16 11:37:39.974879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:34.658 [2024-12-16 11:37:39.974885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:34.658 [2024-12-16 11:37:39.974901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.974914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.974922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.974930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.974940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.658 [2024-12-16 11:37:39.974966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.658 [2024-12-16 11:37:39.974974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffd80, cid 5, qid 0 00:17:34.658 [2024-12-16 11:37:39.975079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.975088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.975092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.975103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.975109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.975113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffd80) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.975128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffd80, cid 5, qid 0 00:17:34.658 [2024-12-16 11:37:39.975225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.975232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.975236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffd80) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.975251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffd80, cid 5, qid 0 00:17:34.658 [2024-12-16 11:37:39.975336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.975343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.975346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffd80) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.975361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffd80, cid 5, qid 0 00:17:34.658 [2024-12-16 11:37:39.975445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.658 [2024-12-16 11:37:39.975452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.658 [2024-12-16 11:37:39.975456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffd80) on tqpair=0x10bed90 00:17:34.658 [2024-12-16 11:37:39.975479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10bed90) 00:17:34.658 [2024-12-16 11:37:39.975549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.658 [2024-12-16 11:37:39.975570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffd80, cid 5, qid 0 00:17:34.658 [2024-12-16 11:37:39.975577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffc00, cid 4, qid 0 00:17:34.658 [2024-12-16 11:37:39.975582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10fff00, cid 6, qid 0 00:17:34.658 [2024-12-16 11:37:39.975587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1100080, cid 7, qid 0 00:17:34.658 [2024-12-16 11:37:39.975730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.658 [2024-12-16 11:37:39.975751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.658 [2024-12-16 11:37:39.975757] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=8192, cccid=5 00:17:34.658 [2024-12-16 11:37:39.975766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ffd80) on tqpair(0x10bed90): expected_datao=0, payload_size=8192 00:17:34.658 [2024-12-16 11:37:39.975770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975793] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.658 [2024-12-16 11:37:39.975806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.658 [2024-12-16 11:37:39.975809] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975813] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=512, cccid=4 00:17:34.658 [2024-12-16 11:37:39.975818] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ffc00) on tqpair(0x10bed90): expected_datao=0, payload_size=512 00:17:34.658 [2024-12-16 11:37:39.975822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975829] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975833] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:17:34.658 [2024-12-16 11:37:39.975844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.658 [2024-12-16 11:37:39.975848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.658 [2024-12-16 11:37:39.975852] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=512, cccid=6 00:17:34.658 [2024-12-16 11:37:39.975856] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10fff00) on tqpair(0x10bed90): expected_datao=0, payload_size=512 00:17:34.658 [2024-12-16 11:37:39.975860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975867] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:34.659 [2024-12-16 11:37:39.975882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:34.659 [2024-12-16 11:37:39.975886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975889] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bed90): datao=0, datal=4096, cccid=7 00:17:34.659 [2024-12-16 11:37:39.975894] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1100080) on tqpair(0x10bed90): expected_datao=0, payload_size=4096 00:17:34.659 [2024-12-16 11:37:39.975898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975905] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.659 [2024-12-16 11:37:39.975920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.659 [2024-12-16 11:37:39.975924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffd80) on tqpair=0x10bed90 00:17:34.659 [2024-12-16 11:37:39.975945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.659 [2024-12-16 11:37:39.975951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.659 [2024-12-16 11:37:39.975955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffc00) on tqpair=0x10bed90 00:17:34.659 [2024-12-16 11:37:39.975972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.659 [2024-12-16 11:37:39.975978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.659 [2024-12-16 11:37:39.975982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.975986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10fff00) on tqpair=0x10bed90 00:17:34.659 [2024-12-16 11:37:39.975994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.659 [2024-12-16 11:37:39.976000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.659 ===================================================== 00:17:34.659 NVMe over Fabrics controller 
at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:34.659 ===================================================== 00:17:34.659 Controller Capabilities/Features 00:17:34.659 ================================ 00:17:34.659 Vendor ID: 8086 00:17:34.659 Subsystem Vendor ID: 8086 00:17:34.659 Serial Number: SPDK00000000000001 00:17:34.659 Model Number: SPDK bdev Controller 00:17:34.659 Firmware Version: 25.01 00:17:34.659 Recommended Arb Burst: 6 00:17:34.659 IEEE OUI Identifier: e4 d2 5c 00:17:34.659 Multi-path I/O 00:17:34.659 May have multiple subsystem ports: Yes 00:17:34.659 May have multiple controllers: Yes 00:17:34.659 Associated with SR-IOV VF: No 00:17:34.659 Max Data Transfer Size: 131072 00:17:34.659 Max Number of Namespaces: 32 00:17:34.659 Max Number of I/O Queues: 127 00:17:34.659 NVMe Specification Version (VS): 1.3 00:17:34.659 NVMe Specification Version (Identify): 1.3 00:17:34.659 Maximum Queue Entries: 128 00:17:34.659 Contiguous Queues Required: Yes 00:17:34.659 Arbitration Mechanisms Supported 00:17:34.659 Weighted Round Robin: Not Supported 00:17:34.659 Vendor Specific: Not Supported 00:17:34.659 Reset Timeout: 15000 ms 00:17:34.659 Doorbell Stride: 4 bytes 00:17:34.659 NVM Subsystem Reset: Not Supported 00:17:34.659 Command Sets Supported 00:17:34.659 NVM Command Set: Supported 00:17:34.659 Boot Partition: Not Supported 00:17:34.659 Memory Page Size Minimum: 4096 bytes 00:17:34.659 Memory Page Size Maximum: 4096 bytes 00:17:34.659 Persistent Memory Region: Not Supported 00:17:34.659 Optional Asynchronous Events Supported 00:17:34.659 Namespace Attribute Notices: Supported 00:17:34.659 Firmware Activation Notices: Not Supported 00:17:34.659 ANA Change Notices: Not Supported 00:17:34.659 PLE Aggregate Log Change Notices: Not Supported 00:17:34.659 LBA Status Info Alert Notices: Not Supported 00:17:34.659 EGE Aggregate Log Change Notices: Not Supported 00:17:34.659 Normal NVM Subsystem Shutdown event: Not Supported 00:17:34.659 Zone Descriptor Change Notices: Not Supported 00:17:34.659 Discovery Log Change Notices: Not Supported 00:17:34.659 Controller Attributes 00:17:34.659 128-bit Host Identifier: Supported 00:17:34.659 Non-Operational Permissive Mode: Not Supported 00:17:34.659 NVM Sets: Not Supported 00:17:34.659 Read Recovery Levels: Not Supported 00:17:34.659 Endurance Groups: Not Supported 00:17:34.659 Predictable Latency Mode: Not Supported 00:17:34.659 Traffic Based Keep ALive: Not Supported 00:17:34.659 Namespace Granularity: Not Supported 00:17:34.659 SQ Associations: Not Supported 00:17:34.659 UUID List: Not Supported 00:17:34.659 Multi-Domain Subsystem: Not Supported 00:17:34.659 Fixed Capacity Management: Not Supported 00:17:34.659 Variable Capacity Management: Not Supported 00:17:34.659 Delete Endurance Group: Not Supported 00:17:34.659 Delete NVM Set: Not Supported 00:17:34.659 Extended LBA Formats Supported: Not Supported 00:17:34.659 Flexible Data Placement Supported: Not Supported 00:17:34.659 00:17:34.659 Controller Memory Buffer Support 00:17:34.659 ================================ 00:17:34.659 Supported: No 00:17:34.659 00:17:34.659 Persistent Memory Region Support 00:17:34.659 ================================ 00:17:34.659 Supported: No 00:17:34.659 00:17:34.659 Admin Command Set Attributes 00:17:34.659 ============================ 00:17:34.659 Security Send/Receive: Not Supported 00:17:34.659 Format NVM: Not Supported 00:17:34.659 Firmware Activate/Download: Not Supported 00:17:34.659 Namespace Management: Not Supported 00:17:34.659 Device Self-Test: Not 
Supported 00:17:34.659 Directives: Not Supported 00:17:34.659 NVMe-MI: Not Supported 00:17:34.659 Virtualization Management: Not Supported 00:17:34.659 Doorbell Buffer Config: Not Supported 00:17:34.659 Get LBA Status Capability: Not Supported 00:17:34.659 Command & Feature Lockdown Capability: Not Supported 00:17:34.659 Abort Command Limit: 4 00:17:34.659 Async Event Request Limit: 4 00:17:34.659 Number of Firmware Slots: N/A 00:17:34.659 Firmware Slot 1 Read-Only: N/A 00:17:34.659 [2024-12-16 11:37:39.976003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.659 [2024-12-16 11:37:39.980027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1100080) on tqpair=0x10bed90 00:17:34.659 Firmware Activation Without Reset: N/A 00:17:34.659 Multiple Update Detection Support: N/A 00:17:34.659 Firmware Update Granularity: No Information Provided 00:17:34.659 Per-Namespace SMART Log: No 00:17:34.659 Asymmetric Namespace Access Log Page: Not Supported 00:17:34.659 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:34.659 Command Effects Log Page: Supported 00:17:34.659 Get Log Page Extended Data: Supported 00:17:34.659 Telemetry Log Pages: Not Supported 00:17:34.659 Persistent Event Log Pages: Not Supported 00:17:34.659 Supported Log Pages Log Page: May Support 00:17:34.659 Commands Supported & Effects Log Page: Not Supported 00:17:34.659 Feature Identifiers & Effects Log Page:May Support 00:17:34.659 NVMe-MI Commands & Effects Log Page: May Support 00:17:34.659 Data Area 4 for Telemetry Log: Not Supported 00:17:34.659 Error Log Page Entries Supported: 128 00:17:34.659 Keep Alive: Supported 00:17:34.659 Keep Alive Granularity: 10000 ms 00:17:34.659 00:17:34.659 NVM Command Set Attributes 00:17:34.659 ========================== 00:17:34.659 Submission Queue Entry Size 00:17:34.659 Max: 64 00:17:34.659 Min: 64 00:17:34.659 Completion Queue Entry Size 00:17:34.659 Max: 16 00:17:34.659 Min: 16 00:17:34.659 Number of Namespaces: 32 00:17:34.659 Compare Command: Supported 00:17:34.659 Write Uncorrectable Command: Not Supported 00:17:34.659 Dataset Management Command: Supported 00:17:34.659 Write Zeroes Command: Supported 00:17:34.659 Set Features Save Field: Not Supported 00:17:34.659 Reservations: Supported 00:17:34.659 Timestamp: Not Supported 00:17:34.659 Copy: Supported 00:17:34.659 Volatile Write Cache: Present 00:17:34.659 Atomic Write Unit (Normal): 1 00:17:34.659 Atomic Write Unit (PFail): 1 00:17:34.659 Atomic Compare & Write Unit: 1 00:17:34.659 Fused Compare & Write: Supported 00:17:34.659 Scatter-Gather List 00:17:34.659 SGL Command Set: Supported 00:17:34.659 SGL Keyed: Supported 00:17:34.659 SGL Bit Bucket Descriptor: Not Supported 00:17:34.659 SGL Metadata Pointer: Not Supported 00:17:34.659 Oversized SGL: Not Supported 00:17:34.659 SGL Metadata Address: Not Supported 00:17:34.659 SGL Offset: Supported 00:17:34.659 Transport SGL Data Block: Not Supported 00:17:34.659 Replay Protected Memory Block: Not Supported 00:17:34.659 00:17:34.659 Firmware Slot Information 00:17:34.660 ========================= 00:17:34.660 Active slot: 1 00:17:34.660 Slot 1 Firmware Revision: 25.01 00:17:34.660 00:17:34.660 00:17:34.660 Commands Supported and Effects 00:17:34.660 ============================== 00:17:34.660 Admin Commands 00:17:34.660 -------------- 00:17:34.660 Get Log Page (02h): Supported 00:17:34.660 Identify (06h): Supported 00:17:34.660 Abort (08h): Supported 00:17:34.660 Set Features (09h): Supported 00:17:34.660 Get Features (0Ah): Supported 00:17:34.660
Asynchronous Event Request (0Ch): Supported 00:17:34.660 Keep Alive (18h): Supported 00:17:34.660 I/O Commands 00:17:34.660 ------------ 00:17:34.660 Flush (00h): Supported LBA-Change 00:17:34.660 Write (01h): Supported LBA-Change 00:17:34.660 Read (02h): Supported 00:17:34.660 Compare (05h): Supported 00:17:34.660 Write Zeroes (08h): Supported LBA-Change 00:17:34.660 Dataset Management (09h): Supported LBA-Change 00:17:34.660 Copy (19h): Supported LBA-Change 00:17:34.660 00:17:34.660 Error Log 00:17:34.660 ========= 00:17:34.660 00:17:34.660 Arbitration 00:17:34.660 =========== 00:17:34.660 Arbitration Burst: 1 00:17:34.660 00:17:34.660 Power Management 00:17:34.660 ================ 00:17:34.660 Number of Power States: 1 00:17:34.660 Current Power State: Power State #0 00:17:34.660 Power State #0: 00:17:34.660 Max Power: 0.00 W 00:17:34.660 Non-Operational State: Operational 00:17:34.660 Entry Latency: Not Reported 00:17:34.660 Exit Latency: Not Reported 00:17:34.660 Relative Read Throughput: 0 00:17:34.660 Relative Read Latency: 0 00:17:34.660 Relative Write Throughput: 0 00:17:34.660 Relative Write Latency: 0 00:17:34.660 Idle Power: Not Reported 00:17:34.660 Active Power: Not Reported 00:17:34.660 Non-Operational Permissive Mode: Not Supported 00:17:34.660 00:17:34.660 Health Information 00:17:34.660 ================== 00:17:34.660 Critical Warnings: 00:17:34.660 Available Spare Space: OK 00:17:34.660 Temperature: OK 00:17:34.660 Device Reliability: OK 00:17:34.660 Read Only: No 00:17:34.660 Volatile Memory Backup: OK 00:17:34.660 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:34.660 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:34.660 Available Spare: 0% 00:17:34.660 Available Spare Threshold: 0% 00:17:34.660 Life Percentage Used: [2024-12-16 11:37:39.980141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-12-16 11:37:39.980187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1100080, cid 7, qid 0 00:17:34.660 [2024-12-16 11:37:39.980267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.660 [2024-12-16 11:37:39.980275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.660 [2024-12-16 11:37:39.980279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1100080) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980324] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:34.660 [2024-12-16 11:37:39.980336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff600) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.660 [2024-12-16 11:37:39.980348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff780) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:17:34.660 [2024-12-16 11:37:39.980358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ff900) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.660 [2024-12-16 11:37:39.980368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.660 [2024-12-16 11:37:39.980382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-12-16 11:37:39.980423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.660 [2024-12-16 11:37:39.980478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.660 [2024-12-16 11:37:39.980486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.660 [2024-12-16 11:37:39.980490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-12-16 11:37:39.980540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.660 [2024-12-16 11:37:39.980614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.660 [2024-12-16 11:37:39.980621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.660 [2024-12-16 11:37:39.980625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980634] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:34.660 [2024-12-16 11:37:39.980639] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:34.660 [2024-12-16 11:37:39.980650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980666] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-12-16 11:37:39.980685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.660 [2024-12-16 11:37:39.980738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.660 [2024-12-16 11:37:39.980745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.660 [2024-12-16 11:37:39.980749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-12-16 11:37:39.980799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.660 [2024-12-16 11:37:39.980851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.660 [2024-12-16 11:37:39.980858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.660 [2024-12-16 11:37:39.980861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.660 [2024-12-16 11:37:39.980876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.660 [2024-12-16 11:37:39.980886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.660 [2024-12-16 11:37:39.980893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.980911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.980969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.980981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.980986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.980990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981056] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 
11:37:39.981460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 
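The repeated FABRIC PROPERTY GET qid:0 cid:3 records through this stretch are the host re-reading a controller property (CSTS, in the standard NVMe shutdown handshake) after the "Prepare to destruct SSD" record above, with RTD3E = 0 us and a 10000 ms shutdown timeout, until the target reports shutdown complete. Tracing at this *DEBUG* verbosity needs a debug build with the nvme log component enabled; a sketch under those assumptions (the --enable-debug configure option is SPDK's, while the -L logflag name for the identify example is an assumption):

  # Hypothetical commands, not part of the captured output: compile debug
  # logging in, then enable the nvme component's *DEBUG* records at run time.
  ./configure --enable-debug && make
  ./build/examples/identify -L nvme -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'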
[2024-12-16 11:37:39.981806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.981912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.981919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.981923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.981938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.981947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.981954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.981973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.982038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.982047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.982051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.982067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.982083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.982104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.982159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.982166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.982170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.982185] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.982201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.982220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.661 [2024-12-16 11:37:39.982269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.661 [2024-12-16 11:37:39.982276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.661 [2024-12-16 11:37:39.982280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.661 [2024-12-16 11:37:39.982295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.661 [2024-12-16 11:37:39.982304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.661 [2024-12-16 11:37:39.982311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.661 [2024-12-16 11:37:39.982330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.662 [2024-12-16 11:37:39.982382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.662 [2024-12-16 11:37:39.982389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.662 [2024-12-16 11:37:39.982392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.662 [2024-12-16 11:37:39.982408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.662 [2024-12-16 11:37:39.982424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.662 [2024-12-16 11:37:39.982443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.662 [2024-12-16 11:37:39.982497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.662 [2024-12-16 11:37:39.982504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.662 [2024-12-16 11:37:39.982508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.662 [2024-12-16 11:37:39.982523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982542] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.662 [2024-12-16 11:37:39.982549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.662 [2024-12-16 11:37:39.982572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.662 [2024-12-16 11:37:39.982627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.662 [2024-12-16 11:37:39.982638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.662 [2024-12-16 11:37:39.982643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.662 [2024-12-16 11:37:39.982659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.662 [2024-12-16 11:37:39.982676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.662 [2024-12-16 11:37:39.982696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.662 [2024-12-16 11:37:39.982750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.662 [2024-12-16 11:37:39.982758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.662 [2024-12-16 11:37:39.982762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.662 [2024-12-16 11:37:39.982777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.662 [2024-12-16 11:37:39.982793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.662 [2024-12-16 11:37:39.982811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0 00:17:34.662 [2024-12-16 11:37:39.982863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:34.662 [2024-12-16 11:37:39.982870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:34.662 [2024-12-16 11:37:39.982873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90 00:17:34.662 [2024-12-16 11:37:39.982888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:34.662 [2024-12-16 11:37:39.982897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90) 00:17:34.662 [2024-12-16 11:37:39.982905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.662 [2024-12-16 11:37:39.982923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0
00:17:34.662 [2024-12-16 11:37:39.982980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:34.662 [2024-12-16 11:37:39.982987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:34.662 [2024-12-16 11:37:39.982991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:34.662 [2024-12-16 11:37:39.982995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90
00:17:34.662 [2024-12-16 11:37:39.983017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:34.662 [2024-12-16 11:37:39.983024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:34.662 [2024-12-16 11:37:39.983028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90)
00:17:34.662 [2024-12-16 11:37:39.983035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.662 [2024-12-16 11:37:39.983056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0
00:17:34.663 [2024-12-16 11:37:39.988030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:34.663 [2024-12-16 11:37:39.988049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:34.663 [2024-12-16 11:37:39.988054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:34.663 [2024-12-16 11:37:39.988058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90
00:17:34.663 [2024-12-16 11:37:39.988073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:34.663 [2024-12-16 11:37:39.988079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:34.663 [2024-12-16 11:37:39.988083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bed90)
00:17:34.663 [2024-12-16 11:37:39.988092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.663 [2024-12-16 11:37:39.988118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ffa80, cid 3, qid 0
00:17:34.663 [2024-12-16 11:37:39.988186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:34.663 [2024-12-16 11:37:39.988193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:34.663 [2024-12-16 11:37:39.988197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:34.663 [2024-12-16 11:37:39.988201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ffa80) on tqpair=0x10bed90
00:17:34.663 [2024-12-16 11:37:39.988210] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:17:34.663 0%
00:17:34.663 Data Units Read: 0
00:17:34.663 Data Units Written: 0
00:17:34.663 Host Read Commands: 0
00:17:34.663 Host Write Commands: 0
00:17:34.663 Controller Busy Time: 0 minutes
00:17:34.663 Power Cycles: 0
00:17:34.663 Power On Hours: 0 hours
00:17:34.663 Unsafe Shutdowns: 0
00:17:34.663 Unrecoverable Media Errors: 0
00:17:34.663 Lifetime Error Log Entries: 0
00:17:34.663 Warning Temperature Time: 0 minutes
00:17:34.663 Critical Temperature Time: 0 minutes
00:17:34.663
00:17:34.663 Number of Queues
00:17:34.663 ================
00:17:34.663 Number of I/O Submission Queues: 127
00:17:34.663 Number of I/O Completion Queues: 127
00:17:34.663
00:17:34.663 Active Namespaces
00:17:34.663 =================
00:17:34.663 Namespace ID:1
00:17:34.663 Error Recovery Timeout: Unlimited
00:17:34.663 Command Set Identifier: NVM (00h)
00:17:34.663 Deallocate: Supported
00:17:34.663 Deallocated/Unwritten Error: Not Supported
00:17:34.663 Deallocated Read Value: Unknown
00:17:34.663 Deallocate in Write Zeroes: Not Supported
00:17:34.663 Deallocated Guard Field: 0xFFFF
00:17:34.663 Flush: Supported
00:17:34.663 Reservation: Supported
00:17:34.663 Namespace Sharing Capabilities: Multiple Controllers
00:17:34.663 Size (in LBAs): 131072 (0GiB)
00:17:34.663 Capacity (in LBAs): 131072 (0GiB)
00:17:34.663 Utilization (in LBAs): 131072 (0GiB)
00:17:34.663 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:34.663 EUI64: ABCDEF0123456789
00:17:34.663 UUID: 3fb85ba5-d902-48cc-81ae-08ec33ff2cad
00:17:34.663 Thin Provisioning: Not Supported
00:17:34.663 Per-NS Atomic Units: Yes
00:17:34.663 Atomic Boundary Size (Normal): 0
00:17:34.663 Atomic Boundary Size (PFail): 0
00:17:34.663 Atomic Boundary Offset: 0
00:17:34.663 Maximum Single Source Range Length: 65535
00:17:34.663 Maximum Copy Length: 65535
00:17:34.663 Maximum Source Range Count: 1
00:17:34.663 NGUID/EUI64 Never Reused: No
00:17:34.663 Namespace Write Protected: No
00:17:34.663 Number of LBA Formats: 1
00:17:34.663 Current LBA Format: LBA Format #00
00:17:34.663 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:34.663
00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:17:34.663 11:37:40
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.663 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.922 rmmod nvme_tcp 00:17:34.922 rmmod nvme_fabrics 00:17:34.922 rmmod nvme_keyring 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 88986 ']' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 88986 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 88986 ']' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 88986 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88986 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.922 killing process with pid 88986 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88986' 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 88986 00:17:34.922 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 88986 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.180 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:35.439 00:17:35.439 real 0m2.375s 00:17:35.439 user 0m4.984s 00:17:35.439 sys 0m0.799s 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.439 ************************************ 00:17:35.439 END TEST nvmf_identify 00:17:35.439 ************************************ 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.439 ************************************ 00:17:35.439 START TEST nvmf_perf 00:17:35.439 ************************************ 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:35.439 * Looking for test storage... 00:17:35.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.439 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:35.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.699 --rc genhtml_branch_coverage=1 00:17:35.699 --rc genhtml_function_coverage=1 00:17:35.699 --rc genhtml_legend=1 00:17:35.699 --rc geninfo_all_blocks=1 00:17:35.699 --rc geninfo_unexecuted_blocks=1 00:17:35.699 00:17:35.699 ' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:35.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.699 --rc genhtml_branch_coverage=1 00:17:35.699 --rc genhtml_function_coverage=1 00:17:35.699 --rc genhtml_legend=1 00:17:35.699 --rc geninfo_all_blocks=1 00:17:35.699 --rc geninfo_unexecuted_blocks=1 00:17:35.699 00:17:35.699 ' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:35.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.699 --rc genhtml_branch_coverage=1 00:17:35.699 --rc genhtml_function_coverage=1 00:17:35.699 --rc genhtml_legend=1 00:17:35.699 --rc geninfo_all_blocks=1 00:17:35.699 --rc geninfo_unexecuted_blocks=1 00:17:35.699 00:17:35.699 ' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:35.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.699 --rc genhtml_branch_coverage=1 00:17:35.699 --rc genhtml_function_coverage=1 00:17:35.699 --rc genhtml_legend=1 00:17:35.699 --rc geninfo_all_blocks=1 00:17:35.699 --rc geninfo_unexecuted_blocks=1 00:17:35.699 00:17:35.699 ' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.699 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:35.700 Cannot find device "nvmf_init_br" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:35.700 Cannot find device "nvmf_init_br2" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:35.700 Cannot find device "nvmf_tgt_br" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.700 Cannot find device "nvmf_tgt_br2" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:35.700 Cannot find device "nvmf_init_br" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:35.700 Cannot find device "nvmf_init_br2" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:35.700 Cannot find device "nvmf_tgt_br" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:35.700 Cannot find device "nvmf_tgt_br2" 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:35.700 11:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:35.700 Cannot find device "nvmf_br" 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:35.700 Cannot find device "nvmf_init_if" 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:35.700 Cannot find device "nvmf_init_if2" 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.700 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:35.959 11:37:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:35.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:35.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms
00:17:35.959
00:17:35.959 --- 10.0.0.3 ping statistics ---
00:17:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.959 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:35.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:35.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms
00:17:35.959
00:17:35.959 --- 10.0.0.4 ping statistics ---
00:17:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.959 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:35.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:35.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms
00:17:35.959
00:17:35.959 --- 10.0.0.1 ping statistics ---
00:17:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.959 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:35.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:35.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:17:35.959
00:17:35.959 --- 10.0.0.2 ping statistics ---
00:17:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.959 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=89247
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 89247
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 89247 ']'
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:35.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
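For anyone replaying this stage by hand, the nvmf_veth_init/nvmfappstart sequence traced above reduces to the Bash sketch below. Interface names, addresses, and the nvmf_tgt invocation are copied from the trace; only the primary interface pair is shown, and the wait loop is an illustrative stand-in for the waitforlisten helper, which additionally retries and checks process liveness.

  # Recreate the target namespace and one veth/bridge path (simplified from nvmf_veth_init).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # same reachability check the trace performs above
  # Launch the target inside the namespace, then wait for its RPC socket to appear.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done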
00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.959 11:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:35.959 [2024-12-16 11:37:41.347272] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:35.959 [2024-12-16 11:37:41.348072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.218 [2024-12-16 11:37:41.500417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.218 [2024-12-16 11:37:41.568963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.218 [2024-12-16 11:37:41.569309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.218 [2024-12-16 11:37:41.569503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.218 [2024-12-16 11:37:41.569779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.218 [2024-12-16 11:37:41.569822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.218 [2024-12-16 11:37:41.571267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.218 [2024-12-16 11:37:41.571372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.218 [2024-12-16 11:37:41.571446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.218 [2024-12-16 11:37:41.571564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:37.154 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:37.721 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:37.721 11:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:37.979 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:37.979 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:38.237 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:38.237 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:38.237 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
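Stripped of the xtrace noise, the target provisioning that perf.sh performs next (traced below) is just this rpc.py sequence. The sketch assumes the default RPC socket (/var/tmp/spdk.sock); commands and arguments are exactly those that appear in the trace.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # Malloc0 was created by bdev_malloc_create 64 512 above
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe bdev attached via gen_nvme.sh above
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420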
00:17:38.237 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:17:38.237 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:38.496 [2024-12-16 11:37:43.808602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:38.496 11:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:38.754 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:38.754 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:39.013 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:39.014 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:17:39.272 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:17:39.530 [2024-12-16 11:37:44.850038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:17:39.530 11:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:17:39.788 11:37:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:17:39.788 11:37:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:39.788 11:37:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:17:39.788 11:37:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:41.164 Initializing NVMe Controllers
00:17:41.164 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:17:41.164 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:17:41.164 Initialization complete. Launching workers.
00:17:41.164 ========================================================
00:17:41.164 Latency(us)
00:17:41.164 Device Information : IOPS MiB/s Average min max
00:17:41.164 PCIE (0000:00:10.0) NSID 1 from core 0: 23421.38 91.49 1366.55 344.25 7785.81
00:17:41.164 ========================================================
00:17:41.164 Total : 23421.38 91.49 1366.55 344.25 7785.81
00:17:41.164
00:17:41.164 11:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:42.540 Initializing NVMe Controllers
00:17:42.540 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:42.540 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:42.540 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:42.540 Initialization complete. Launching workers.
00:17:42.540 ========================================================
00:17:42.540 Latency(us)
00:17:42.540 Device Information : IOPS MiB/s Average min max
00:17:42.540 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3534.82 13.81 282.53 107.25 4243.72
00:17:42.540 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.14 7005.99 11999.23
00:17:42.540 ========================================================
00:17:42.540 Total : 3659.32 14.29 548.33 107.25 11999.23
00:17:42.540
00:17:42.540 11:37:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:43.916 Initializing NVMe Controllers
00:17:43.916 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:43.916 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:43.916 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:43.916 Initialization complete. Launching workers.
00:17:43.916 ========================================================
00:17:43.916 Latency(us)
00:17:43.917 Device Information : IOPS MiB/s Average min max
00:17:43.917 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8744.17 34.16 3660.31 731.24 10793.46
00:17:43.917 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2730.69 10.67 11845.73 6228.23 28331.43
00:17:43.917 ========================================================
00:17:43.917 Total : 11474.86 44.82 5608.20 731.24 28331.43
00:17:43.917
00:17:43.917 11:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:17:43.917 11:37:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:48.104 Initializing NVMe Controllers
00:17:48.105 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:48.105 Controller IO queue size 128, less than required.
00:17:48.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:48.105 Controller IO queue size 128, less than required.
00:17:48.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:48.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:48.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:48.105 Initialization complete. Launching workers.
00:17:48.105 ========================================================
00:17:48.105 Latency(us)
00:17:48.105 Device Information : IOPS MiB/s Average min max
00:17:48.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1705.88 426.47 75678.17 45614.10 141316.24
00:17:48.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 481.62 120.40 361159.49 72117.23 1520921.78
00:17:48.105 ========================================================
00:17:48.105 Total : 2187.50 546.88 138532.26 45614.10 1520921.78
00:17:48.105
00:17:48.105 11:37:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
00:17:48.105 Initializing NVMe Controllers
00:17:48.105 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:48.105 Controller IO queue size 128, less than required.
00:17:48.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:48.105 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:17:48.105 Controller IO queue size 128, less than required.
00:17:48.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:48.105 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:17:48.105 WARNING: Some requested NVMe devices were skipped
00:17:48.105 No valid NVMe controllers or AIO or URING devices found
00:17:48.105 11:37:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
00:17:50.637 Initializing NVMe Controllers
00:17:50.637 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:50.637 Controller IO queue size 128, less than required.
00:17:50.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:50.637 Controller IO queue size 128, less than required.
00:17:50.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:50.637 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:50.637 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:50.637 Initialization complete. Launching workers.
00:17:50.637
00:17:50.637 ====================
00:17:50.637 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:50.637 TCP transport:
00:17:50.637 polls: 6925
00:17:50.637 idle_polls: 4160
00:17:50.637 sock_completions: 2765
00:17:50.637 nvme_completions: 5519
00:17:50.637 submitted_requests: 8248
00:17:50.637 queued_requests: 1
00:17:50.637
00:17:50.637 ====================
00:17:50.637 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:50.637 TCP transport:
00:17:50.637 polls: 9547
00:17:50.637 idle_polls: 6759
00:17:50.637 sock_completions: 2788
00:17:50.637 nvme_completions: 5793
00:17:50.637 submitted_requests: 8716
00:17:50.637 queued_requests: 1
00:17:50.637 ========================================================
00:17:50.637 Latency(us)
00:17:50.637 Device Information : IOPS MiB/s Average min max
00:17:50.637 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1379.37 344.84 94756.66 64695.47 166503.93
00:17:50.637 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1447.87 361.97 89069.03 39168.19 142343.74
00:17:50.637 ========================================================
00:17:50.637 Total : 2827.24 706.81 91843.95 39168.19 166503.93
00:17:50.637
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:50.637 rmmod nvme_tcp
00:17:50.637 rmmod nvme_fabrics
00:17:50.637 rmmod nvme_keyring
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 89247 ']'
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 89247
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 89247 ']'
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 89247
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:50.637 11:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89247
00:17:50.637 killing process with pid 89247
11:37:56
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.637 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.637 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89247' 00:17:50.637 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 89247 00:17:50.637 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 89247 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.250 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:51.508 00:17:51.508 real 0m16.228s 00:17:51.508 user 0m59.298s 00:17:51.508 sys 0m3.773s 00:17:51.508 11:37:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.508 ************************************ 00:17:51.508 END TEST nvmf_perf 00:17:51.508 11:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:51.508 ************************************ 00:17:51.768 11:37:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:51.768 11:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:51.768 11:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.768 11:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.768 ************************************ 00:17:51.768 START TEST nvmf_fio_host 00:17:51.768 ************************************ 00:17:51.768 11:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:51.768 * Looking for test storage... 00:17:51.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.768 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.768 --rc genhtml_branch_coverage=1 00:17:51.768 --rc genhtml_function_coverage=1 00:17:51.768 --rc genhtml_legend=1 00:17:51.768 --rc geninfo_all_blocks=1 00:17:51.769 --rc geninfo_unexecuted_blocks=1 00:17:51.769 00:17:51.769 ' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.769 --rc genhtml_branch_coverage=1 00:17:51.769 --rc genhtml_function_coverage=1 00:17:51.769 --rc genhtml_legend=1 00:17:51.769 --rc geninfo_all_blocks=1 00:17:51.769 --rc geninfo_unexecuted_blocks=1 00:17:51.769 00:17:51.769 ' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.769 --rc genhtml_branch_coverage=1 00:17:51.769 --rc genhtml_function_coverage=1 00:17:51.769 --rc genhtml_legend=1 00:17:51.769 --rc geninfo_all_blocks=1 00:17:51.769 --rc geninfo_unexecuted_blocks=1 00:17:51.769 00:17:51.769 ' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.769 --rc genhtml_branch_coverage=1 00:17:51.769 --rc genhtml_function_coverage=1 00:17:51.769 --rc genhtml_legend=1 00:17:51.769 --rc geninfo_all_blocks=1 00:17:51.769 --rc geninfo_unexecuted_blocks=1 00:17:51.769 00:17:51.769 ' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.769 11:37:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
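The message "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" above is not fatal: the trace shows the test '[' '' -eq 1 ']', a numeric comparison against a variable that expands to the empty string. test(1) requires an integer on both sides of -eq, so the command exits with status 2 and the branch is simply skipped. A minimal reproduction and the usual guard (the variable name VAL is hypothetical; the script's real variable is not visible in the trace):

  VAL=''
  [ "$VAL" -eq 1 ]                            # -> [: : integer expression expected
  [ "${VAL:-0}" -eq 1 ] || echo 'flag unset'  # guard: default empty expansion to 0

The nvmftestinit call that follows then builds the virtual test network (NET_TYPE=virt): two initiator veth interfaces on the host, two target veth interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with iptables rules admitting NVMe/TCP on port 4420. Condensed from the nvmf_veth_init trace below (link-up steps are abbreviated into the loop; the initial "Cannot find device" teardown pass is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator, 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator, 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target,    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target,    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" master nvmf_br                           # bridge all peer ends together
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four pings further down (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify this topology before the target application is started.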
00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:51.769 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.028 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.029 Cannot find device "nvmf_init_br" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.029 Cannot find device "nvmf_init_br2" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.029 Cannot find device "nvmf_tgt_br" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:52.029 Cannot find device "nvmf_tgt_br2" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.029 Cannot find device "nvmf_init_br" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.029 Cannot find device "nvmf_init_br2" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.029 Cannot find device "nvmf_tgt_br" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.029 Cannot find device "nvmf_tgt_br2" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.029 Cannot find device "nvmf_br" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.029 Cannot find device "nvmf_init_if" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.029 Cannot find device "nvmf_init_if2" 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:52.029 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:52.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:52.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:17:52.288 00:17:52.288 --- 10.0.0.3 ping statistics --- 00:17:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.288 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:52.288 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:52.288 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:52.288 00:17:52.288 --- 10.0.0.4 ping statistics --- 00:17:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.288 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:52.288 00:17:52.288 --- 10.0.0.1 ping statistics --- 00:17:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.288 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:52.288 00:17:52.288 --- 10.0.0.2 ping statistics --- 00:17:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.288 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89788 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89788 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 89788 ']' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.288 11:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.288 [2024-12-16 11:37:57.664464] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:17:52.289 [2024-12-16 11:37:57.664571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.547 [2024-12-16 11:37:57.819546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.547 [2024-12-16 11:37:57.882807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.547 [2024-12-16 11:37:57.882893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.547 [2024-12-16 11:37:57.882917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.547 [2024-12-16 11:37:57.882928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.547 [2024-12-16 11:37:57.882941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
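The target was started inside the namespace with tracing enabled (-e 0xFFFF), and the notices above spell out how to inspect those events. Both commands in the sketch below are taken from the notices themselves; only the copy destination is an arbitrary choice:

  # Live snapshot of the nvmf tracepoints for app instance 0:
  spdk_trace -s nvmf -i 0
  # Offline analysis: copy the shared-memory trace file somewhere durable.
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0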
00:17:52.547 [2024-12-16 11:37:57.884237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.547 [2024-12-16 11:37:57.884373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.547 [2024-12-16 11:37:57.884511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.547 [2024-12-16 11:37:57.884517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.806 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.806 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:52.806 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:53.064 [2024-12-16 11:37:58.310297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.064 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:53.064 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.064 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.064 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:53.323 Malloc1 00:17:53.323 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.891 11:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.891 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.149 [2024-12-16 11:37:59.483176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.149 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:54.408 11:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.667 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:54.667 fio-3.35 00:17:54.667 Starting 1 thread 00:17:57.199 00:17:57.199 test: (groupid=0, jobs=1): err= 0: pid=89910: Mon Dec 16 11:38:02 2024 00:17:57.199 read: IOPS=8973, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec) 00:17:57.199 slat (nsec): min=1930, max=327875, avg=2575.91, stdev=3498.23 00:17:57.199 clat (usec): min=3243, max=13715, avg=7438.15, stdev=513.65 00:17:57.199 lat (usec): min=3284, max=13717, avg=7440.73, stdev=513.50 00:17:57.199 clat percentiles (usec): 00:17:57.199 | 1.00th=[ 6390], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:17:57.199 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:17:57.199 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:17:57.199 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10814], 99.95th=[12649], 00:17:57.199 | 99.99th=[13435] 00:17:57.199 bw ( KiB/s): min=35064, max=36612, per=99.85%, avg=35839.00, stdev=632.51, samples=4 00:17:57.199 iops : min= 8766, max= 9153, avg=8959.75, stdev=158.13, samples=4 00:17:57.199 write: IOPS=8997, BW=35.1MiB/s (36.9MB/s)(70.5MiB/2007msec); 0 zone resets 00:17:57.199 slat (usec): min=2, max=252, avg= 2.71, stdev= 2.45 00:17:57.199 clat (usec): min=2426, max=13264, avg=6747.03, stdev=461.92 00:17:57.199 lat (usec): min=2441, max=13266, avg=6749.73, stdev=461.83 00:17:57.199 clat percentiles (usec): 00:17:57.199 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:17:57.199 | 
30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:17:57.199 | 70.00th=[ 6980], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 00:17:57.199 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[10683], 99.95th=[12387], 00:17:57.199 | 99.99th=[13173] 00:17:57.199 bw ( KiB/s): min=35824, max=36096, per=99.84%, avg=35932.00, stdev=126.91, samples=4 00:17:57.199 iops : min= 8956, max= 9024, avg=8983.00, stdev=31.73, samples=4 00:17:57.199 lat (msec) : 4=0.06%, 10=99.76%, 20=0.17% 00:17:57.199 cpu : usr=68.15%, sys=23.28%, ctx=12, majf=0, minf=6 00:17:57.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:57.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.199 issued rwts: total=18010,18057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.199 00:17:57.199 Run status group 0 (all jobs): 00:17:57.199 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:17:57.199 WRITE: bw=35.1MiB/s (36.9MB/s), 35.1MiB/s-35.1MiB/s (36.9MB/s-36.9MB/s), io=70.5MiB (74.0MB), run=2007-2007msec 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:57.199 11:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.199 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:57.199 fio-3.35 00:17:57.199 Starting 1 thread 00:17:59.732 00:17:59.732 test: (groupid=0, jobs=1): err= 0: pid=89954: Mon Dec 16 11:38:04 2024 00:17:59.732 read: IOPS=8081, BW=126MiB/s (132MB/s)(253MiB/2005msec) 00:17:59.732 slat (usec): min=3, max=137, avg= 3.81, stdev= 2.41 00:17:59.732 clat (usec): min=2720, max=17510, avg=9343.19, stdev=2276.94 00:17:59.732 lat (usec): min=2737, max=17513, avg=9347.00, stdev=2276.96 00:17:59.732 clat percentiles (usec): 00:17:59.732 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:17:59.732 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10028], 00:17:59.732 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11994], 95.00th=[13173], 00:17:59.732 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16319], 99.95th=[16581], 00:17:59.732 | 99.99th=[16712] 00:17:59.732 bw ( KiB/s): min=57568, max=74144, per=50.40%, avg=65176.00, stdev=7788.76, samples=4 00:17:59.732 iops : min= 3598, max= 4634, avg=4073.50, stdev=486.80, samples=4 00:17:59.732 write: IOPS=4755, BW=74.3MiB/s (77.9MB/s)(134MiB/1801msec); 0 zone resets 00:17:59.732 slat (usec): min=33, max=593, avg=38.15, stdev= 9.01 00:17:59.732 clat (usec): min=3967, max=19337, avg=11394.63, stdev=2072.35 00:17:59.732 lat (usec): min=4002, max=19375, avg=11432.78, stdev=2072.56 00:17:59.732 clat percentiles (usec): 00:17:59.732 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:17:59.732 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:17:59.732 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14091], 95.00th=[15401], 00:17:59.732 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:17:59.732 | 99.99th=[19268] 00:17:59.732 bw ( KiB/s): min=59904, max=77184, per=89.52%, avg=68120.00, stdev=8208.12, samples=4 00:17:59.732 iops : min= 3744, max= 4824, avg=4257.50, stdev=513.01, samples=4 00:17:59.732 lat (msec) : 4=0.14%, 10=47.33%, 20=52.53% 00:17:59.732 cpu : usr=73.45%, sys=17.61%, ctx=8, majf=0, minf=17 00:17:59.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:17:59.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.732 issued rwts: total=16204,8565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.732 00:17:59.732 Run status group 0 (all jobs): 00:17:59.732 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2005-2005msec 00:17:59.732 WRITE: bw=74.3MiB/s (77.9MB/s), 
74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=134MiB (140MB), run=1801-1801msec 00:17:59.732 11:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.732 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:59.732 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:59.732 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:59.990 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:59.990 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.990 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.991 rmmod nvme_tcp 00:17:59.991 rmmod nvme_fabrics 00:17:59.991 rmmod nvme_keyring 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 89788 ']' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 89788 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 89788 ']' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 89788 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89788 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.991 killing process with pid 89788 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89788' 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 89788 00:17:59.991 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 89788 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # 
iptables-save 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.249 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:00.508 00:18:00.508 real 0m8.851s 00:18:00.508 user 0m34.802s 00:18:00.508 sys 0m2.412s 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.508 ************************************ 00:18:00.508 END TEST nvmf_fio_host 00:18:00.508 ************************************ 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.508 ************************************ 00:18:00.508 START TEST nvmf_failover 00:18:00.508 ************************************ 00:18:00.508 11:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:00.768 * Looking for test storage... 00:18:00.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.768 11:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:00.768 11:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:18:00.768 11:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:00.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.768 --rc genhtml_branch_coverage=1 00:18:00.768 --rc genhtml_function_coverage=1 00:18:00.768 --rc genhtml_legend=1 00:18:00.768 --rc geninfo_all_blocks=1 00:18:00.768 --rc geninfo_unexecuted_blocks=1 00:18:00.768 00:18:00.768 ' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:00.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.768 --rc genhtml_branch_coverage=1 00:18:00.768 --rc genhtml_function_coverage=1 00:18:00.768 --rc genhtml_legend=1 00:18:00.768 --rc geninfo_all_blocks=1 00:18:00.768 --rc geninfo_unexecuted_blocks=1 00:18:00.768 00:18:00.768 ' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:00.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.768 --rc genhtml_branch_coverage=1 00:18:00.768 --rc genhtml_function_coverage=1 00:18:00.768 --rc genhtml_legend=1 00:18:00.768 --rc geninfo_all_blocks=1 00:18:00.768 --rc geninfo_unexecuted_blocks=1 00:18:00.768 00:18:00.768 ' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:00.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.768 --rc genhtml_branch_coverage=1 00:18:00.768 --rc genhtml_function_coverage=1 00:18:00.768 --rc genhtml_legend=1 00:18:00.768 --rc geninfo_all_blocks=1 00:18:00.768 --rc geninfo_unexecuted_blocks=1 00:18:00.768 00:18:00.768 ' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.768 
11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.768 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
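The cmp_versions trace earlier in this test (scripts/common.sh@333-368, driven by 'lt 1.15 2' for the lcov check) splits each version string on '.', '-' and ':' and compares the fields numerically. A minimal standalone sketch of that comparison, as a simplified re-derivation and not the exact helper from scripts/common.sh:

    # Sketch of the field-wise version comparison stepped through above.
    # Split both versions on '.', '-' and ':' (the IFS=.-: seen in the
    # trace), then compare numerically field by field; missing fields
    # compare as 0. Simplified illustration only.
    lt_version() {
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt_version 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' result in the trace
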
00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.769 Cannot find device "nvmf_init_br" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.769 Cannot find device "nvmf_init_br2" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:00.769 Cannot find device "nvmf_tgt_br" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.769 Cannot find device "nvmf_tgt_br2" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.769 Cannot find device "nvmf_init_br" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.769 Cannot find device "nvmf_init_br2" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.769 Cannot find device "nvmf_tgt_br" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.769 Cannot find device "nvmf_tgt_br2" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.769 Cannot find device "nvmf_br" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.769 Cannot find device "nvmf_init_if" 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:00.769 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:01.028 Cannot find device "nvmf_init_if2" 00:18:01.028 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:01.028 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.029 
11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:18:01.029 00:18:01.029 --- 10.0.0.3 ping statistics --- 00:18:01.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.029 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.029 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.029 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:18:01.029 00:18:01.029 --- 10.0.0.4 ping statistics --- 00:18:01.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.029 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:01.029 00:18:01.029 --- 10.0.0.1 ping statistics --- 00:18:01.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.029 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:01.029 00:18:01.029 --- 10.0.0.2 ping statistics --- 00:18:01.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.029 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.029 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=90225 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 90225 00:18:01.288 11:38:06 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90225 ']' 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.288 11:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.288 [2024-12-16 11:38:06.533474] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:18:01.288 [2024-12-16 11:38:06.534213] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.288 [2024-12-16 11:38:06.694242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.547 [2024-12-16 11:38:06.759210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.547 [2024-12-16 11:38:06.759293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.547 [2024-12-16 11:38:06.759307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.547 [2024-12-16 11:38:06.759318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.547 [2024-12-16 11:38:06.759327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
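Condensed, the nvmf_veth_init sequence traced above (nvmf/common.sh@145-225) builds a virtual topology: veth pairs for the initiator side, peers moved into the nvmf_tgt_ns_spdk namespace for the target side, everything joined by the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and ping checks in both directions. A sketch showing one of the two pairs per side (names and addresses are the ones visible in the trace; the *_if2/*_br2 pair, error handling, and the iptables comment tags are omitted):

    # One initiator-side and one target-side veth pair, bridged.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                 # enslave both bridge ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                      # initiator -> target, as verified in the log
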
00:18:01.547 [2024-12-16 11:38:06.760606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.547 [2024-12-16 11:38:06.760725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.547 [2024-12-16 11:38:06.760734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.483 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:02.483 [2024-12-16 11:38:07.871469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.742 11:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:03.001 Malloc0 00:18:03.001 11:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.260 11:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.519 11:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:03.777 [2024-12-16 11:38:09.038212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.778 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:04.036 [2024-12-16 11:38:09.298360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:04.036 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:04.295 [2024-12-16 11:38:09.550554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90337 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90337 /var/tmp/bdevperf.sock 
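The failover.sh@22-28 trace above provisions the whole target side over JSON-RPC: create the TCP transport, back it with a 64 MiB malloc bdev, expose that as a namespace of cnode1, and listen on three ports so there are paths to fail between. Written out as a condensed script (commands and arguments are exactly those in the trace; this is a sketch, not the test script itself):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners to fail over between
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s "$port"
    done
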
00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90337 ']' 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.295 11:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:05.231 11:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.231 11:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:05.231 11:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:05.799 NVMe0n1 00:18:05.799 11:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:06.057 00:18:06.057 11:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90390 00:18:06.057 11:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.057 11:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:06.993 11:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:07.251 [2024-12-16 11:38:12.564118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 
11:38:12.564247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to be set 00:18:07.251 [2024-12-16 11:38:12.564432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ca90 is same with the state(6) to 
be set 00:18:07.251 11:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:10.532 11:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:10.790 00:18:10.790 11:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:11.049 [2024-12-16 11:38:16.215703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 
11:38:16.215938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.215999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to 
be set 00:18:11.049 [2024-12-16 11:38:16.216161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.049 [2024-12-16 11:38:16.216391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 [2024-12-16 11:38:16.216492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d540 is same with the state(6) to be set 00:18:11.050 11:38:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:14.333 11:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:14.333 [2024-12-16 11:38:19.524308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:14.333 11:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:15.268 11:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4422 00:18:15.528 [2024-12-16 11:38:20.827118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513f50 is same with the state(6) to be set 00:18:15.528 [2024-12-16 11:38:20.827352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:18:15.529 11:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90390
00:18:22.126 {
00:18:22.126   "results": [
00:18:22.126     {
00:18:22.126       "job": "NVMe0n1",
00:18:22.126       "core_mask": "0x1",
00:18:22.126       "workload": "verify",
00:18:22.126       "status": "finished",
00:18:22.126       "verify_range": {
00:18:22.126         "start": 0,
00:18:22.126         "length": 16384
00:18:22.126       },
00:18:22.126       "queue_depth": 128,
00:18:22.126       "io_size": 4096,
00:18:22.126       "runtime": 15.012282,
00:18:22.126       "iops": 9234.37222935194,
00:18:22.126       "mibps": 36.07176652090602,
00:18:22.126       "io_failed": 2805,
00:18:22.126       "io_timeout": 0,
00:18:22.126       "avg_latency_us": 13554.982358517367,
00:18:22.126       "min_latency_us": 625.5709090909091,
00:18:22.126       "max_latency_us": 49092.42181818182
00:18:22.126     }
00:18:22.126   ],
00:18:22.126   "core_count": 1
00:18:22.126 }
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90337
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90337 ']'
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90337
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90337
00:18:22.126 killing process with pid 90337
11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90337'
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90337
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90337
00:18:22.126 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-12-16 11:38:09.616318] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
[2024-12-16 11:38:09.616415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90337 ]
[2024-12-16 11:38:09.764688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 11:38:09.829063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
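The bdevperf summary a few lines up is plain JSON, so post-processing it is straightforward. A hedged Python sketch of pulling out the headline numbers; the field names are copied from that block, while reading them from a file named results.json is purely illustrative:

    import json

    # Field names match the bdevperf summary printed above; the file
    # name "results.json" is an assumption for this sketch.
    with open("results.json") as f:
        summary = json.load(f)

    for job in summary["results"]:
        print(f'{job["job"]}: {job["iops"]:.0f} IOPS, '
              f'{job["mibps"]:.2f} MiB/s, '
              f'avg latency {job["avg_latency_us"] / 1000:.2f} ms, '
              f'{job["io_failed"]} failed I/Os in {job["runtime"]:.1f} s')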
00:18:22.126 9217.00 IOPS, 36.00 MiB/s [2024-12-16T11:38:27.540Z] [2024-12-16 11:38:12.565628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.126 [2024-12-16 11:38:12.565675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.126 [2024-12-16 11:38:12.565895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.126 [2024-12-16 11:38:12.565910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.565923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.565938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.565952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:22.127 [2024-12-16 11:38:12.565967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.565980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.127 [2024-12-16 11:38:12.566232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.566952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.566968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.127 [2024-12-16 11:38:12.567301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.127 [2024-12-16 11:38:12.567322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87464 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.128 [2024-12-16 11:38:12.567684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.567973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.567987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.128 [2024-12-16 11:38:12.568621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.128 [2024-12-16 11:38:12.568636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.129 [2024-12-16 11:38:12.568875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.568905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.568934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.568979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.568994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 [2024-12-16 11:38:12.569327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.129 [2024-12-16 11:38:12.569342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.129 
00:18:22.129 [2024-12-16 11:38:12.569357-11:38:12.569682] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:88080-88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; each aborted with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [11 repeated print_command/print_completion NOTICE pairs condensed]
00:18:22.129 [2024-12-16 11:38:12.569718-11:38:12.570022] nvme_qpair.c: *ERROR*: aborting queued i/o; queued WRITE sqid:1 cid:0 nsid:1 lba:88168-88208 len:8 PRP1 0x0 PRP2 0x0 completed manually, each with ABORTED - SQ DELETION (00/08) [6 repeated manual-completion NOTICE blocks condensed]
00:18:22.130 [2024-12-16 11:38:12.570089] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:22.130 [2024-12-16 11:38:12.570149-11:38:12.570263] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 aborted with ABORTED - SQ DELETION (00/08) [4 admin-queue abort NOTICE pairs condensed]
00:18:22.130 [2024-12-16 11:38:12.570276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:22.130 [2024-12-16 11:38:12.570322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f7f50 (9): Bad file descriptor
00:18:22.130 [2024-12-16 11:38:12.574191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:22.130 [2024-12-16 11:38:12.596275] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
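(The script that drives these failovers is not part of this excerpt. As a rough sketch of the pattern, assuming a running nvmf target serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.3 with the three ports seen in this log, and using SPDK's stock rpc.py client; the RPC socket path and bdev name below are illustrative only:)
  rpc=scripts/rpc.py
  # Host side: attach the same controller NQN at the alternate trids so
  # bdev_nvme has failover paths registered.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Target side: dropping the active listener forces the host to fail over,
  # producing the "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" NOTICE.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420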
00:18:22.130 9004.00 IOPS, 35.17 MiB/s [2024-12-16T11:38:27.544Z] 9284.00 IOPS, 36.27 MiB/s [2024-12-16T11:38:27.544Z] 9368.75 IOPS, 36.60 MiB/s [2024-12-16T11:38:27.544Z]
00:18:22.130 [2024-12-16 11:38:16.218241-11:38:16.218862] nvme_qpair.c: *NOTICE*: READ sqid:1 nsid:1 lba:106536-106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each aborted with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [18 repeated print_command/print_completion NOTICE pairs condensed]
00:18:22.130 [2024-12-16 11:38:16.218878-11:38:16.221862] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:106680-107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; each aborted with ABORTED - SQ DELETION (00/08) [94 repeated print_command/print_completion NOTICE pairs condensed]
00:18:22.133 [2024-12-16 11:38:16.221906-11:38:16.236921] nvme_qpair.c: *ERROR*: aborting queued i/o; queued WRITE sqid:1 cid:0 nsid:1 lba:107432-107552 len:8 PRP1 0x0 PRP2 0x0 completed manually, each with ABORTED - SQ DELETION (00/08) [16 repeated manual-completion NOTICE blocks condensed]
00:18:22.133 [2024-12-16 11:38:16.237014] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:18:22.133 [2024-12-16 11:38:16.237135-11:38:16.237290] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 aborted with ABORTED - SQ DELETION (00/08) [4 admin-queue abort NOTICE pairs condensed]
00:18:22.133 [2024-12-16 11:38:16.237311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:18:22.133 [2024-12-16 11:38:16.237366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f7f50 (9): Bad file descriptor
00:18:22.133 [2024-12-16 11:38:16.243257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:18:22.133 [2024-12-16 11:38:16.265134] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:18:22.133 9256.60 IOPS, 36.16 MiB/s [2024-12-16T11:38:27.547Z] 9230.83 IOPS, 36.06 MiB/s [2024-12-16T11:38:27.547Z] 9235.71 IOPS, 36.08 MiB/s [2024-12-16T11:38:27.547Z] 9257.12 IOPS, 36.16 MiB/s [2024-12-16T11:38:27.547Z] 9264.78 IOPS, 36.19 MiB/s [2024-12-16T11:38:27.547Z]
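(The interleaved "IOPS, MiB/s" figures are the I/O tool's periodic throughput counters. They are consistent with the len:8 commands above at a 512 B block size: 8 blocks of 512 B is a 4 KiB I/O, so the 9004.00 IOPS sample works out to the logged 35.17 MiB/s. A quick check, runnable as-is:)
  awk 'BEGIN { printf "%.2f MiB/s\n", 9004.00 * 8 * 512 / (1024 * 1024) }'   # prints 35.17 MiB/s, matching the log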
00:18:22.134 [2024-12-16 11:38:20.828722-11:38:20.829951] nvme_qpair.c: *NOTICE*: READ sqid:1 nsid:1 lba:58272-58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each aborted with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [36 repeated print_command/print_completion NOTICE pairs condensed; output truncated mid-run at 11:38:20.829966]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.134 [2024-12-16 11:38:20.829980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.829995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.134 [2024-12-16 11:38:20.830020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.830044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.134 [2024-12-16 11:38:20.830059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.830074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.134 [2024-12-16 11:38:20.830088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.830103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.134 [2024-12-16 11:38:20.830117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.830133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.134 [2024-12-16 11:38:20.830147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.134 [2024-12-16 11:38:20.830162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 
11:38:20.830927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.830956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.830971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.830993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.135 [2024-12-16 11:38:20.831223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.831254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.135 [2024-12-16 11:38:20.831269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.135 [2024-12-16 11:38:20.831283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.831984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.831998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832210] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.136 [2024-12-16 11:38:20.832368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832505] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.136 [2024-12-16 11:38:20.832534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.136 [2024-12-16 11:38:20.832548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.137 [2024-12-16 11:38:20.832795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.137 
[2024-12-16 11:38:20.832841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.137 [2024-12-16 11:38:20.832853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59288 len:8 PRP1 0x0 PRP2 0x0 00:18:22.137 [2024-12-16 11:38:20.832866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.832930] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:22.137 [2024-12-16 11:38:20.832990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.137 [2024-12-16 11:38:20.833026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.833053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.137 [2024-12-16 11:38:20.833067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.833082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.137 [2024-12-16 11:38:20.833159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.833176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.137 [2024-12-16 11:38:20.833190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.137 [2024-12-16 11:38:20.833204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:22.137 [2024-12-16 11:38:20.837072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:22.137 [2024-12-16 11:38:20.837115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f7f50 (9): Bad file descriptor 00:18:22.137 [2024-12-16 11:38:20.859543] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
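Everything from the first READ abort above down to this point is a single failover pass: bdev_nvme declares the 10.0.0.3:4422 path failed, every command still queued on I/O qpair qid:1 completes as ABORTED - SQ DELETION (00/08), and the controller reconnects on 10.0.0.3:4420 ("Resetting controller successful"). When reading a captured log like this one, the flood can be tallied per opcode; the one-liner below is an illustrative sketch only, assuming the output was saved to the try.txt file this test writes:

    # Sketch: tally aborted commands per opcode from a saved run log.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt |
        awk '{ n[$NF]++ } END { for (op in n) print op, n[op], "aborted" }'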
00:18:22.137 9208.30 IOPS, 35.97 MiB/s [2024-12-16T11:38:27.551Z] 9195.55 IOPS, 35.92 MiB/s [2024-12-16T11:38:27.551Z] 9206.50 IOPS, 35.96 MiB/s [2024-12-16T11:38:27.551Z] 9220.77 IOPS, 36.02 MiB/s [2024-12-16T11:38:27.551Z] 9229.14 IOPS, 36.05 MiB/s [2024-12-16T11:38:27.551Z] 9235.40 IOPS, 36.08 MiB/s
00:18:22.137 Latency(us)
00:18:22.137 [2024-12-16T11:38:27.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:22.137 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:22.137 Verification LBA range: start 0x0 length 0x4000
00:18:22.137 NVMe0n1 : 15.01 9234.37 36.07 186.85 0.00 13554.98 625.57 49092.42
00:18:22.137 [2024-12-16T11:38:27.551Z] ===================================================================================================================
00:18:22.137 [2024-12-16T11:38:27.551Z] Total : 9234.37 36.07 186.85 0.00 13554.98 625.57 49092.42
00:18:22.137 Received shutdown signal, test time was about 15.000000 seconds
00:18:22.137
00:18:22.137 Latency(us)
00:18:22.137 [2024-12-16T11:38:27.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:22.137 [2024-12-16T11:38:27.551Z] ===================================================================================================================
00:18:22.137 [2024-12-16T11:38:27.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:22.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90594
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90594 /var/tmp/bdevperf.sock
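host/failover.sh restarts bdevperf here in standalone RPC mode (-z, with a private socket) and waits for it via waitforlisten. A minimal sketch of that launch-and-wait pattern follows; the polling loop is an assumption (waitforlisten's real body lives in common/autotest_common.sh), while the binary path, socket, and flags are taken from the trace:

    # Sketch, not the test's literal code: start bdevperf, then poll its
    # RPC socket until it answers (rpc_get_methods is a standard SPDK RPC).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" || exit 1   # stop waiting if bdevperf died
        sleep 0.5
    done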
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90594 ']'
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:22.137 11:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:22.137 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:22.137 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:22.137 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:22.137 [2024-12-16 11:38:27.328301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:22.137 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:22.396 [2024-12-16 11:38:27.612549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:22.396 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:22.654 NVMe0n1
00:18:22.654 11:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:22.913 00
00:18:22.913 11:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:23.480 00
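The two nvmf_subsystem_add_listener calls and the three bdev_nvme_attach_controller calls above give a single NVMe bdev three TCP paths to the same subsystem; with -x failover, the second and third attach register standby paths rather than creating new controllers (hence the empty "00" replies). Condensed into a loop, the attach side looks roughly like this (a sketch using the values from the trace, not the script's literal text):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
        # First attach creates NVMe0; later ones add failover paths.
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.3 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done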
00:18:28.239 "io_size": 4096, 00:18:28.239 "runtime": 1.007749, 00:18:28.239 "iops": 9486.489195226191, 00:18:28.239 "mibps": 37.05659841885231, 00:18:28.239 "io_failed": 0, 00:18:28.239 "io_timeout": 0, 00:18:28.239 "avg_latency_us": 13409.864642069228, 00:18:28.239 "min_latency_us": 1765.0036363636364, 00:18:28.239 "max_latency_us": 15371.17090909091 00:18:28.239 } 00:18:28.239 ], 00:18:28.239 "core_count": 1 00:18:28.239 } 00:18:28.239 11:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:28.239 [2024-12-16 11:38:26.714443] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:18:28.239 [2024-12-16 11:38:26.714550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90594 ] 00:18:28.239 [2024-12-16 11:38:26.857101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.239 [2024-12-16 11:38:26.907731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.239 [2024-12-16 11:38:29.141528] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:28.239 [2024-12-16 11:38:29.141657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.239 [2024-12-16 11:38:29.141684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.239 [2024-12-16 11:38:29.141703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.239 [2024-12-16 11:38:29.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.239 [2024-12-16 11:38:29.141732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.239 [2024-12-16 11:38:29.141761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.239 [2024-12-16 11:38:29.141775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.239 [2024-12-16 11:38:29.141789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.239 [2024-12-16 11:38:29.141803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:28.239 [2024-12-16 11:38:29.141869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:28.239 [2024-12-16 11:38:29.141901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6f50 (9): Bad file descriptor 00:18:28.239 [2024-12-16 11:38:29.153498] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:28.239 Running I/O for 1 seconds... 
00:18:28.239 11:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:28.239 [2024-12-16 11:38:26.714443] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:18:28.239 [2024-12-16 11:38:26.714550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90594 ]
00:18:28.239 [2024-12-16 11:38:26.857101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:28.239 [2024-12-16 11:38:26.907731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:28.239 [2024-12-16 11:38:29.141528] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:28.239 [2024-12-16 11:38:29.141657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:28.239 [2024-12-16 11:38:29.141684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:28.239 [2024-12-16 11:38:29.141703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:28.239 [2024-12-16 11:38:29.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:28.239 [2024-12-16 11:38:29.141732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:28.239 [2024-12-16 11:38:29.141761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:28.239 [2024-12-16 11:38:29.141775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:28.239 [2024-12-16 11:38:29.141789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:28.239 [2024-12-16 11:38:29.141803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:18:28.239 [2024-12-16 11:38:29.141869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:18:28.239 [2024-12-16 11:38:29.141901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6f50 (9): Bad file descriptor
00:18:28.239 [2024-12-16 11:38:29.153498] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:18:28.239 Running I/O for 1 seconds...
00:18:28.239 9409.00 IOPS, 36.75 MiB/s
00:18:28.239 Latency(us)
00:18:28.240 [2024-12-16T11:38:33.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.240 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:28.240 Verification LBA range: start 0x0 length 0x4000
00:18:28.240 NVMe0n1 : 1.01 9486.49 37.06 0.00 0.00 13409.86 1765.00 15371.17
00:18:28.240 [2024-12-16T11:38:33.654Z] ===================================================================================================================
00:18:28.240 [2024-12-16T11:38:33.654Z] Total : 9486.49 37.06 0.00 0.00 13409.86 1765.00 15371.17
00:18:28.240 11:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:28.240 11:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:28.499 11:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:28.757 11:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:28.757 11:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:29.016 11:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:29.275 11:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:32.559 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
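The @95-@103 sequence above is the multipath teardown check: the remaining paths (port 4422, then 4421) are detached one at a time, and after each step bdev_nvme_get_controllers piped into grep -q NVMe0 confirms the controller is still present. Rewritten as a loop it would look roughly like this (a sketch; addresses and subsystem NQN taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for port in 4422 4421; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 ||
            { echo "NVMe0 vanished after dropping port $port" >&2; exit 1; }
    done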
00:18:32.818 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90594 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90594 ']' 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90594 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:32.818 11:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90594 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 90594 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90594' 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90594 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90594
00:18:33.075 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:18:33.075 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:33.334 rmmod nvme_tcp
00:18:33.334 rmmod nvme_fabrics
00:18:33.334 rmmod nvme_keyring
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 90225 ']'
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 90225
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90225 ']'
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90225
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90225
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90225' killing process with pid 90225
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90225
00:18:33.334 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90225
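killprocess, traced twice above for the bdevperf process (90594) and the nvmf target (90225), follows a fixed shape: verify the pid argument, probe liveness with kill -0, read the command name with ps --no-headers -o comm= (on Linux), refuse to touch sudo, then kill and wait. A condensed sketch of that shape; the real helper in common/autotest_common.sh has more branches than shown here:

    killprocess() {                              # sketch of the traced logic
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"               # wait works for our children
    }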
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.604 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:33.604 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:33.604 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:33.604 11:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:33.604 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:33.878 00:18:33.878 real 0m33.352s 00:18:33.878 user 2m9.164s 00:18:33.878 sys 0m4.667s 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:33.878 ************************************ 00:18:33.878 END TEST nvmf_failover 00:18:33.878 ************************************ 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.878 ************************************ 00:18:33.878 START TEST nvmf_host_discovery 00:18:33.878 ************************************ 00:18:33.878 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:34.137 * Looking for test storage... 
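The nvmftestfini teardown traced above strips only the firewall rules the harness itself added: each rule is installed through a small wrapper that tags it with an iptables comment, so cleanup is a filter-and-restore. A minimal sketch of that pattern, with the helper name (ipts) and tag format taken from the expansions logged later in this run:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }  # tag every rule the suite adds
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # example: allow NVMe/TCP on 4420
    iptables-save | grep -v SPDK_NVMF | iptables-restore           # teardown: drop only the tagged rules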
00:18:34.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.137 --rc genhtml_branch_coverage=1 00:18:34.137 --rc genhtml_function_coverage=1 00:18:34.137 --rc genhtml_legend=1 00:18:34.137 --rc geninfo_all_blocks=1 00:18:34.137 --rc geninfo_unexecuted_blocks=1 00:18:34.137 00:18:34.137 ' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.137 --rc genhtml_branch_coverage=1 00:18:34.137 --rc genhtml_function_coverage=1 00:18:34.137 --rc genhtml_legend=1 00:18:34.137 --rc geninfo_all_blocks=1 00:18:34.137 --rc geninfo_unexecuted_blocks=1 00:18:34.137 00:18:34.137 ' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.137 --rc genhtml_branch_coverage=1 00:18:34.137 --rc genhtml_function_coverage=1 00:18:34.137 --rc genhtml_legend=1 00:18:34.137 --rc geninfo_all_blocks=1 00:18:34.137 --rc geninfo_unexecuted_blocks=1 00:18:34.137 00:18:34.137 ' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.137 --rc genhtml_branch_coverage=1 00:18:34.137 --rc genhtml_function_coverage=1 00:18:34.137 --rc genhtml_legend=1 00:18:34.137 --rc geninfo_all_blocks=1 00:18:34.137 --rc geninfo_unexecuted_blocks=1 00:18:34.137 00:18:34.137 ' 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.137 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:34.138 Cannot find device "nvmf_init_br" 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:34.138 Cannot find device "nvmf_init_br2" 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:34.138 Cannot find device "nvmf_tgt_br" 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.138 Cannot find device "nvmf_tgt_br2" 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:34.138 Cannot find device "nvmf_init_br" 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:34.138 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:34.397 Cannot find device "nvmf_init_br2" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:34.397 Cannot find device "nvmf_tgt_br" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:34.397 Cannot find device "nvmf_tgt_br2" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:34.397 Cannot find device "nvmf_br" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:34.397 Cannot find device "nvmf_init_if" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:34.397 Cannot find device "nvmf_init_if2" 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:34.397 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:34.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:18:34.397 00:18:34.397 --- 10.0.0.3 ping statistics --- 00:18:34.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.398 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:34.398 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:34.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:34.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:18:34.656 00:18:34.656 --- 10.0.0.4 ping statistics --- 00:18:34.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.656 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:34.656 00:18:34.656 --- 10.0.0.1 ping statistics --- 00:18:34.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.656 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:34.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:34.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:34.656 00:18:34.656 --- 10.0.0.2 ping statistics --- 00:18:34.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.656 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=91078 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 91078 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91078 ']' 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.656 11:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.656 [2024-12-16 11:38:39.921931] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
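Both SPDK apps in this test are driven over JSON-RPC: the target is launched inside the namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 91078 above), and waitforlisten blocks until its UNIX socket answers. A simplified stand-in for that start/wait pair, using rpc_get_methods as an arbitrary cheap probe rather than the helper's actual internals:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5   # retry until the app is up and listening on its RPC socket
    done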
00:18:34.656 [2024-12-16 11:38:39.922066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.656 [2024-12-16 11:38:40.067174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.915 [2024-12-16 11:38:40.128518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.915 [2024-12-16 11:38:40.128579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.915 [2024-12-16 11:38:40.128589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.915 [2024-12-16 11:38:40.128596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.915 [2024-12-16 11:38:40.128603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.915 [2024-12-16 11:38:40.128988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 [2024-12-16 11:38:40.968775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 [2024-12-16 11:38:40.976855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 null0 00:18:35.849 11:38:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 null1 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.849 11:38:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91128 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91128 /tmp/host.sock 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91128 ']' 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:35.849 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.850 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:35.850 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:35.850 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.850 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.850 [2024-12-16 11:38:41.072469] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:18:35.850 [2024-12-16 11:38:41.072575] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91128 ] 00:18:35.850 [2024-12-16 11:38:41.223649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.108 [2024-12-16 11:38:41.288640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.108 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:36.367 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.368 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 [2024-12-16 11:38:41.809126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:36.627 11:38:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.627 11:38:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.885 11:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:36.885 11:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:37.144 [2024-12-16 11:38:42.452137] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:37.144 [2024-12-16 11:38:42.452174] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:37.144 
[2024-12-16 11:38:42.452210] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:37.144 [2024-12-16 11:38:42.538267] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:37.403 [2024-12-16 11:38:42.592658] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:37.403 [2024-12-16 11:38:42.593603] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x95aed0:1 started. 00:18:37.403 [2024-12-16 11:38:42.595688] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:37.403 [2024-12-16 11:38:42.595731] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:37.403 [2024-12-16 11:38:42.600605] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x95aed0 was disconnected and freed. delete nvme_qpair. 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.661 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:37.920 [2024-12-16 11:38:43.284320] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x95b480:1 started. 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.920 [2024-12-16 11:38:43.291313] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x95b480 was disconnected and freed. delete nvme_qpair. 
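The interleaved "local 'cond=...'", "local max=10", "(( max-- ))", "eval", and "sleep 1" trace lines above all come from the waitforcondition helper in autotest_common.sh, and the "notification_count=" / "notify_id=" lines come from discovery.sh's notification counter built on notify_get_notifications. A minimal sketch of both, reconstructed from the traced lines (the real bodies may differ; the failure branch is an assumption, since every wait in this run succeeds):

waitforcondition() {
    local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10    # retry budget, per the 'local max=10' trace
    while ((max--)); do
        eval "$cond" && return 0   # matches the eval / 'return 0' lines
        sleep 1                    # matches the 'sleep 1' between attempts
    done
    return 1   # assumed failure path; never reached in this passing run
}

get_notification_count() {
    # Count notifications newer than the last seen id; the checks in this
    # run advance notify_id 0 -> 1 -> 2 (and later -> 4) as events arrive.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
        -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}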
00:18:37.920 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.179 [2024-12-16 11:38:43.393671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:38.179 [2024-12-16 11:38:43.394618] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:38.179 [2024-12-16 11:38:43.394672] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.179 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.180 [2024-12-16 11:38:43.480659] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.180 [2024-12-16 11:38:43.545142] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:38.180 [2024-12-16 11:38:43.545219] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:38.180 [2024-12-16 11:38:43.545230] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:38.180 [2024-12-16 11:38:43.545236] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:38.180 11:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.589 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:39.590 11:38:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.590 [2024-12-16 11:38:44.682616] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:39.590 [2024-12-16 11:38:44.682672] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:39.590 [2024-12-16 11:38:44.682944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.590 [2024-12-16 11:38:44.682983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.590 [2024-12-16 11:38:44.682998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.590 [2024-12-16 11:38:44.683021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.590 [2024-12-16 11:38:44.683033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.590 [2024-12-16 11:38:44.683042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.590 [2024-12-16 11:38:44.683052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.590 [2024-12-16 11:38:44.683062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.590 [2024-12-16 11:38:44.683071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:39.590 11:38:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.590 [2024-12-16 11:38:44.692882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.590 [2024-12-16 11:38:44.702918] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.590 [2024-12-16 11:38:44.702947] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.590 [2024-12-16 11:38:44.702955] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.702961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.590 [2024-12-16 11:38:44.702991] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.703107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.590 [2024-12-16 11:38:44.703131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.590 [2024-12-16 11:38:44.703142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.590 [2024-12-16 11:38:44.703160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.590 [2024-12-16 11:38:44.703176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.590 [2024-12-16 11:38:44.703185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.590 [2024-12-16 11:38:44.703196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.590 [2024-12-16 11:38:44.703214] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.590 [2024-12-16 11:38:44.703220] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
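The ERROR burst that begins here is the expected fallout of step @127 above: removing the 4420 listener aborts the outstanding admin commands (the ABORTED - SQ DELETION completions) and leaves the host's bdev_nvme cycling through delete-qpairs -> reconnect -> connect() errno 111 (ECONNREFUSED) -> "Resetting controller failed", one cycle roughly every 10 ms (timestamps 44.703, 44.713, 44.723, ...), until the discovery poller prunes the dead path. The trigger, as issued at @127:

# Drop the first listener; host reconnects to 10.0.0.3:4420 now fail
# until the next discovery log page removes the path (see "not found"
# further below).
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420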
00:18:39.590 [2024-12-16 11:38:44.703225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:39.590 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.590 [2024-12-16 11:38:44.713000] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.590 [2024-12-16 11:38:44.713048] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.590 [2024-12-16 11:38:44.713056] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.713061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.590 [2024-12-16 11:38:44.713090] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.713147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.590 [2024-12-16 11:38:44.713168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.590 [2024-12-16 11:38:44.713179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.590 [2024-12-16 11:38:44.713195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.590 [2024-12-16 11:38:44.713210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.590 [2024-12-16 11:38:44.713218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.590 [2024-12-16 11:38:44.713228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.590 [2024-12-16 11:38:44.713236] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.590 [2024-12-16 11:38:44.713242] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.590 [2024-12-16 11:38:44.713246] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:39.590 [2024-12-16 11:38:44.723100] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.590 [2024-12-16 11:38:44.723141] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.590 [2024-12-16 11:38:44.723147] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.723152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.590 [2024-12-16 11:38:44.723204] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:18:39.590 [2024-12-16 11:38:44.723258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.590 [2024-12-16 11:38:44.723278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.590 [2024-12-16 11:38:44.723288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.590 [2024-12-16 11:38:44.723304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.590 [2024-12-16 11:38:44.723318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.590 [2024-12-16 11:38:44.723327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.590 [2024-12-16 11:38:44.723336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.590 [2024-12-16 11:38:44.723344] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.590 [2024-12-16 11:38:44.723349] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.590 [2024-12-16 11:38:44.723354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:39.590 [2024-12-16 11:38:44.733215] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.590 [2024-12-16 11:38:44.733259] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.590 [2024-12-16 11:38:44.733266] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.733272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.590 [2024-12-16 11:38:44.733317] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.590 [2024-12-16 11:38:44.733403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.590 [2024-12-16 11:38:44.733424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.590 [2024-12-16 11:38:44.733435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.590 [2024-12-16 11:38:44.733451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.591 [2024-12-16 11:38:44.733465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.591 [2024-12-16 11:38:44.733474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.591 [2024-12-16 11:38:44.733483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.591 [2024-12-16 11:38:44.733491] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:39.591 [2024-12-16 11:38:44.733497] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.591 [2024-12-16 11:38:44.733502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:39.591 [2024-12-16 11:38:44.743312] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.591 [2024-12-16 11:38:44.743367] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.591 [2024-12-16 11:38:44.743373] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.743395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.591 [2024-12-16 11:38:44.743439] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.743493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.591 [2024-12-16 11:38:44.743513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.591 [2024-12-16 11:38:44.743523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.591 [2024-12-16 11:38:44.743539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.591 [2024-12-16 11:38:44.743553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.591 [2024-12-16 11:38:44.743562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.591 [2024-12-16 11:38:44.743571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.591 [2024-12-16 11:38:44.743579] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.591 [2024-12-16 11:38:44.743584] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.591 [2024-12-16 11:38:44.743589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.591 [2024-12-16 11:38:44.753450] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.591 [2024-12-16 11:38:44.753467] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.591 [2024-12-16 11:38:44.753473] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.753478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.591 [2024-12-16 11:38:44.753589] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.753656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.591 [2024-12-16 11:38:44.753742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.591 [2024-12-16 11:38:44.753824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.591 [2024-12-16 11:38:44.753894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.591 [2024-12-16 11:38:44.753986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.591 [2024-12-16 11:38:44.754087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.591 [2024-12-16 11:38:44.754159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:18:39.591 [2024-12-16 11:38:44.754225] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.591 [2024-12-16 11:38:44.754236] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.591 [2024-12-16 11:38:44.754241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:39.591 [2024-12-16 11:38:44.763599] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:39.591 [2024-12-16 11:38:44.763629] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:39.591 [2024-12-16 11:38:44.763636] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.763642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:39.591 [2024-12-16 11:38:44.763754] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:39.591 [2024-12-16 11:38:44.763820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.591 [2024-12-16 11:38:44.763924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9380b0 with addr=10.0.0.3, port=4420 00:18:39.591 [2024-12-16 11:38:44.764019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9380b0 is same with the state(6) to be set 00:18:39.591 [2024-12-16 11:38:44.764110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9380b0 (9): Bad file descriptor 00:18:39.591 [2024-12-16 11:38:44.764198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:39.591 [2024-12-16 11:38:44.764291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:39.591 [2024-12-16 11:38:44.764350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:39.591 [2024-12-16 11:38:44.764434] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:39.591 [2024-12-16 11:38:44.764450] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:39.591 [2024-12-16 11:38:44.764456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
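The retry loop resolves just below: the next discovery log page reports the 4420 path "not found", the poller drops it, and only 4421 survives, so the @131 wait sees exactly "4421". That check goes through get_subsystem_paths, whose pipeline is visible in the discovery.sh@63 trace lines; reconstructed here (the upstream helper may differ cosmetically):

get_subsystem_paths() {
    # List the trsvcids of all paths of controller "$1" on the host app.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n \
        | xargs   # flatten to one line, e.g. "4420 4421" -> "4421" here
}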
00:18:39.591 [2024-12-16 11:38:44.768806] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:39.591 [2024-12-16 11:38:44.768840] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:39.591 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.592 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.592 11:38:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.592 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.850 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.851 11:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.786 [2024-12-16 11:38:46.122631] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:40.786 [2024-12-16 11:38:46.122837] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:40.786 [2024-12-16 11:38:46.122899] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:41.044 [2024-12-16 11:38:46.208738] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:41.044 [2024-12-16 11:38:46.267016] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:41.044 [2024-12-16 11:38:46.267567] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x935860:1 started. 00:18:41.044 [2024-12-16 11:38:46.269817] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:41.044 [2024-12-16 11:38:46.269856] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.044 [2024-12-16 11:38:46.271630] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x935860 was disconnected and freed. delete nvme_qpair. 
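Steps @143 and @149 below are negative tests: with the "nvme" discovery service already running, starting discovery again against the same 10.0.0.3:8009 target must fail, and rpc.py surfaces SPDK's -17 errno as "File exists" (the Code=-17 records below). The NOT wrapper asserts exactly that failure; a simplified sketch from the autotest_common.sh@652-@679 trace (the real helper also vets the wrapped command via valid_exec_arg and special-cases exit codes above 128):

NOT() {
    # Run the wrapped command expecting it to fail.
    local es=0
    "$@" || es=$?
    ((es != 0))   # NOT succeeds only when the command failed (es=1 below)
}

# Step @143: restarting the existing discovery service fails with
# JSON-RPC Code=-17 (File exists), as the records below show.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w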
00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.044 2024/12/16 11:38:46 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:41.044 request: 00:18:41.044 { 00:18:41.044 "method": "bdev_nvme_start_discovery", 00:18:41.044 "params": { 00:18:41.044 "name": "nvme", 00:18:41.044 "trtype": "tcp", 00:18:41.044 "traddr": "10.0.0.3", 00:18:41.044 "adrfam": "ipv4", 00:18:41.044 "trsvcid": "8009", 00:18:41.044 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:41.044 "wait_for_attach": true 00:18:41.044 } 00:18:41.044 } 00:18:41.044 Got JSON-RPC error response 00:18:41.044 GoRPCClient: error on JSON-RPC call 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:41.044 11:38:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:41.044 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.045 2024/12/16 11:38:46 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:41.045 request: 00:18:41.045 { 00:18:41.045 "method": "bdev_nvme_start_discovery", 00:18:41.045 "params": { 00:18:41.045 "name": "nvme_second", 00:18:41.045 "trtype": "tcp", 00:18:41.045 "traddr": "10.0.0.3", 00:18:41.045 "adrfam": "ipv4", 00:18:41.045 "trsvcid": "8009", 00:18:41.045 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:41.045 "wait_for_attach": true 00:18:41.045 } 
00:18:41.045 } 00:18:41.045 Got JSON-RPC error response 00:18:41.045 GoRPCClient: error on JSON-RPC call 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:41.045 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.303 11:38:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.303 11:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.239 [2024-12-16 11:38:47.534235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.239 [2024-12-16 11:38:47.534322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6170 with addr=10.0.0.3, port=8010 00:18:42.239 [2024-12-16 11:38:47.534349] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:42.239 [2024-12-16 11:38:47.534358] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:42.239 [2024-12-16 11:38:47.534368] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:43.174 [2024-12-16 11:38:48.534233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.174 [2024-12-16 11:38:48.534313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6170 with addr=10.0.0.3, port=8010 00:18:43.174 [2024-12-16 11:38:48.534341] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:43.174 [2024-12-16 11:38:48.534352] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:43.174 [2024-12-16 11:38:48.534362] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:44.547 [2024-12-16 11:38:49.534082] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:44.547 2024/12/16 11:38:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:44.547 request: 00:18:44.547 { 00:18:44.547 "method": "bdev_nvme_start_discovery", 00:18:44.547 "params": { 00:18:44.547 "name": "nvme_second", 00:18:44.547 "trtype": "tcp", 00:18:44.547 "traddr": "10.0.0.3", 00:18:44.547 "adrfam": "ipv4", 00:18:44.547 "trsvcid": "8010", 00:18:44.547 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:44.547 "wait_for_attach": false, 00:18:44.547 "attach_timeout_ms": 3000 00:18:44.547 } 00:18:44.547 } 00:18:44.547 Got JSON-RPC error response 00:18:44.547 GoRPCClient: error on JSON-RPC call 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91128 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.547 rmmod nvme_tcp 00:18:44.547 rmmod nvme_fabrics 00:18:44.547 rmmod nvme_keyring 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 91078 ']' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 91078 ']' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.547 killing process with pid 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 91078' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 91078 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.547 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.805 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.805 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.805 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.805 11:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:44.805 00:18:44.805 real 0m10.913s 00:18:44.805 user 0m20.748s 00:18:44.805 sys 0m1.755s 
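The discovery test ends with the nvmftestfini teardown visible above: kill the test apps, unload the kernel NVMe-oF modules, strip the SPDK iptables rules, and delete the veth/bridge topology. Collected into one sketch (PIDs and interface names are the ones from this log; the exact plumbing of iptables-save/grep/iptables-restore inside iptr is an assumption, as are the internals of _remove_spdk_ns, which this trace does not show):

    kill 91128                                  # host-side test app
    kill 91078                                  # nvmf_tgt (nvmfpid)
    modprobe -v -r nvme-tcp                     # also drops nvme_fabrics/nvme_keyring deps
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove SPDK_NVMF rules
    ip link delete nvmf_br type bridge          # bridge and veth teardown
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    # _remove_spdk_ns then disposes of the namespace itself (internals not shown here).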
00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.805 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.805 ************************************ 00:18:44.805 END TEST nvmf_host_discovery 00:18:44.805 ************************************ 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.064 ************************************ 00:18:45.064 START TEST nvmf_host_multipath_status 00:18:45.064 ************************************ 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:45.064 * Looking for test storage... 00:18:45.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.064 --rc genhtml_branch_coverage=1 00:18:45.064 --rc genhtml_function_coverage=1 00:18:45.064 --rc genhtml_legend=1 00:18:45.064 --rc geninfo_all_blocks=1 00:18:45.064 --rc geninfo_unexecuted_blocks=1 00:18:45.064 00:18:45.064 ' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.064 --rc genhtml_branch_coverage=1 00:18:45.064 --rc genhtml_function_coverage=1 00:18:45.064 --rc genhtml_legend=1 00:18:45.064 --rc geninfo_all_blocks=1 00:18:45.064 --rc geninfo_unexecuted_blocks=1 00:18:45.064 00:18:45.064 ' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.064 --rc genhtml_branch_coverage=1 00:18:45.064 --rc genhtml_function_coverage=1 00:18:45.064 --rc genhtml_legend=1 00:18:45.064 --rc geninfo_all_blocks=1 00:18:45.064 --rc geninfo_unexecuted_blocks=1 00:18:45.064 00:18:45.064 ' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.064 --rc genhtml_branch_coverage=1 00:18:45.064 --rc genhtml_function_coverage=1 00:18:45.064 --rc genhtml_legend=1 00:18:45.064 --rc geninfo_all_blocks=1 00:18:45.064 --rc geninfo_unexecuted_blocks=1 00:18:45.064 00:18:45.064 ' 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.064 11:38:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:18:45.064 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.065 Cannot find device "nvmf_init_br" 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:45.065 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.323 Cannot find device "nvmf_init_br2" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.323 Cannot find device "nvmf_tgt_br" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.323 Cannot find device "nvmf_tgt_br2" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.323 Cannot find device "nvmf_init_br" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.323 Cannot find device "nvmf_init_br2" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.323 Cannot find device "nvmf_tgt_br" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.323 Cannot find device "nvmf_tgt_br2" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.323 Cannot find device "nvmf_br" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:45.323 Cannot find device "nvmf_init_if" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.323 Cannot find device "nvmf_init_if2" 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.323 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.324 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:45.582 00:18:45.582 --- 10.0.0.3 ping statistics --- 00:18:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.582 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:18:45.582 00:18:45.582 --- 10.0.0.4 ping statistics --- 00:18:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.582 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:45.582 00:18:45.582 --- 10.0.0.1 ping statistics --- 00:18:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.582 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:45.582 00:18:45.582 --- 10.0.0.2 ping statistics --- 00:18:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.582 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=91649 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 91649 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91649 ']' 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.582 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.583 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
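nvmf_veth_init above builds a two-namespace veth topology bridged through nvmf_br: initiator-side interfaces in the root namespace (10.0.0.1/10.0.0.2) and target-side interfaces inside nvmf_tgt_ns_spdk (10.0.0.3/10.0.0.4), with the four pings verifying it end to end. A condensed sketch of the first initiator/target pair, commands taken from this trace (the second if2/br2 pair follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if # target side
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
    ping -c 1 10.0.0.3                                   # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns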
00:18:45.583 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.583 11:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 [2024-12-16 11:38:50.902292] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:18:45.583 [2024-12-16 11:38:50.902384] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.840 [2024-12-16 11:38:51.056096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.840 [2024-12-16 11:38:51.112533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.840 [2024-12-16 11:38:51.112610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.840 [2024-12-16 11:38:51.112630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.840 [2024-12-16 11:38:51.112641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.840 [2024-12-16 11:38:51.112650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.840 [2024-12-16 11:38:51.113960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.840 [2024-12-16 11:38:51.113973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.840 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.840 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:45.840 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.841 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.841 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:46.098 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.098 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91649 00:18:46.098 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:46.356 [2024-12-16 11:38:51.525971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.356 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:46.614 Malloc0 00:18:46.614 11:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:46.872 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.130 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.387 [2024-12-16 11:38:52.687454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.387 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:47.645 [2024-12-16 11:38:52.943623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91739 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91739 /var/tmp/bdevperf.sock 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91739 ']' 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
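The target-side configuration that the multipath test drives through rpc.py is spread across the trace above; gathered into one sketch (every command appears verbatim in this log, ordering per the multipath_status.sh line numbers shown):

    # Transport, backing bdev, and an ANA-enabled subsystem (-r, -m 2) with two listeners.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # bdevperf then attaches one controller per listener in multipath mode:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -x multipath -l -1 -o 10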
00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.645 11:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:48.211 11:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.211 11:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:48.211 11:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:48.469 11:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:48.726 Nvme0n1 00:18:48.726 11:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:49.292 Nvme0n1 00:18:49.292 11:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.292 11:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:51.189 11:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:51.189 11:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:51.450 11:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:51.708 11:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:52.640 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:52.640 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:52.640 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.640 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.205 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.205 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:53.205 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.205 11:38:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:53.463 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.463 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:53.463 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.463 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:53.721 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.721 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:53.721 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.721 11:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:53.979 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.979 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:53.979 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:53.979 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.279 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.279 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:54.279 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.279 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:54.536 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.536 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:54.536 11:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:54.794 11:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.052 11:39:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.427 11:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:56.685 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.685 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:56.685 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.685 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:56.943 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.943 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:56.943 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.943 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.201 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.201 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:57.201 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:57.201 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.458 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.458 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:57.458 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.716 11:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:57.973 11:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.973 11:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:57.973 11:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:58.230 11:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:58.488 11:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:59.421 11:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:59.421 11:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:59.421 11:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.421 11:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:59.679 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.679 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:59.679 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:59.679 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.936 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.936 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:59.936 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.936 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:00.194 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.194 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:00.194 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:00.194 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.452 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.452 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:00.452 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:00.452 11:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.017 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.017 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:01.017 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.017 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:01.275 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.275 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:01.275 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:01.532 11:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:01.790 11:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:02.723 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:02.723 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:02.723 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.723 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:02.982 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.982 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:02.982 11:39:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.982 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:03.240 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.240 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:03.240 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:03.240 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.804 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.804 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:03.805 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.805 11:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.062 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.062 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.062 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.062 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.319 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.319 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:04.319 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.319 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.577 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.577 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:04.577 11:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:04.836 11:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:05.093 11:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.465 11:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.722 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.722 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:06.722 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.722 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:06.981 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.981 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:06.981 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:06.981 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.546 11:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.111 11:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.111 11:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:08.111 11:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:08.111 11:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:08.676 11:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:09.609 11:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:09.610 11:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:09.610 11:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.610 11:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.867 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:09.867 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:09.867 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.867 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.128 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.128 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.128 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.128 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
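
Every check_status cycle in the trace expands into the same two building blocks. A reconstruction inferred from the xtrace lines at multipath_status.sh@59-@64 (the function bodies are an assumption, not the verbatim script; $rpc as in the sketch above):

set_ANA_state() { # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

port_status() { # $1 = port, $2 = field (current|connected|accessible), $3 = expected
    [[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
         jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

check_status is then six port_status calls, one per port and field, which is exactly the true/false argument pattern printed at @68 through @73 after each ANA transition and one-second settle.
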
00:19:10.396 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.396 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.396 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.396 11:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.963 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.963 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:10.963 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.963 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.221 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.221 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.221 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.221 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.480 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.480 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:11.738 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:11.738 11:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:11.996 11:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:12.255 11:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:13.191 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:13.191 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:13.191 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
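
Up to the bdev_nvme_set_multipath_policy call above, at most one path per cycle reported current == true; switching Nvme0n1 from the default active_passive policy to active_active is what lets the optimized/optimized case that follows assert current == true on both 4420 and 4421. When inspecting this by hand, a single jq invocation can summarize all three flags per path (field names are taken from the trace; the output format here is only a suggestion):

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
    '.poll_groups[].io_paths[] | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'
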
00:19:13.191 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.449 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.449 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:13.449 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.449 11:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.016 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.016 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.016 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.016 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.274 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.274 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:14.274 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.274 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:14.533 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.533 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:14.533 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.533 11:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.791 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.791 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:14.791 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.791 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.049 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.049 
11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:15.049 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:15.307 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:15.566 11:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:16.540 11:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:16.540 11:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:16.540 11:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.540 11:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.799 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.799 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:16.799 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.799 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.057 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.057 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.057 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.057 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.315 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.315 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.315 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.315 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.574 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.574 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:17.574 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.574 11:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.142 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.142 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.142 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.142 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.401 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.402 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:18.402 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:18.660 11:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:18.918 11:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:19.854 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:19.854 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.854 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.854 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:20.111 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.111 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.111 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.111 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.678 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.678 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:20.678 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.678 11:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:20.678 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.678 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:20.678 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.678 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:20.936 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.936 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:20.936 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.936 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.504 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.504 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:21.504 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.504 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.762 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.762 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:21.762 11:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:22.021 11:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:22.280 11:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:23.213 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:23.213 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:23.214 11:39:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.214 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:23.779 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.779 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:23.779 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.779 11:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.038 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.038 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.038 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.038 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:24.296 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.296 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:24.296 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.296 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:24.554 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.554 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:24.554 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.554 11:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:24.813 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.813 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:24.813 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.813 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91739 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91739 ']' 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91739 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91739 00:19:25.071 killing process with pid 91739 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91739' 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91739 00:19:25.071 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91739 00:19:25.071 { 00:19:25.071 "results": [ 00:19:25.071 { 00:19:25.071 "job": "Nvme0n1", 00:19:25.071 "core_mask": "0x4", 00:19:25.071 "workload": "verify", 00:19:25.071 "status": "terminated", 00:19:25.071 "verify_range": { 00:19:25.071 "start": 0, 00:19:25.071 "length": 16384 00:19:25.071 }, 00:19:25.071 "queue_depth": 128, 00:19:25.071 "io_size": 4096, 00:19:25.071 "runtime": 35.898881, 00:19:25.071 "iops": 8554.389202270679, 00:19:25.071 "mibps": 33.41558282136984, 00:19:25.071 "io_failed": 0, 00:19:25.071 "io_timeout": 0, 00:19:25.071 "avg_latency_us": 14931.668037642135, 00:19:25.071 "min_latency_us": 181.52727272727273, 00:19:25.071 "max_latency_us": 4026531.84 00:19:25.071 } 00:19:25.071 ], 00:19:25.071 "core_count": 1 00:19:25.071 } 00:19:25.333 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91739 00:19:25.333 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.333 [2024-12-16 11:38:53.013070] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:19:25.333 [2024-12-16 11:38:53.013175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91739 ] 00:19:25.333 [2024-12-16 11:38:53.151916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.333 [2024-12-16 11:38:53.203467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.333 Running I/O for 90 seconds... 
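
The JSON summary above is internally consistent: roughly 35.9 s of runtime at ~8554 IOPS with 4096-byte I/O yields the reported throughput. A quick sanity check with the values copied from the results block:

# iops * io_size in bytes, converted to MiB/s:
python3 -c 'print(8554.389202270679 * 4096 / 2**20)'   # 33.41558..., matches "mibps"

The per-second samples that follow hover around 9.2-9.5 K IOPS; the lower overall average is consistent with I/O stalling while paths were inaccessible during the ANA transitions.
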
00:19:25.333 9473.00 IOPS, 37.00 MiB/s [2024-12-16T11:39:30.747Z] 9552.50 IOPS, 37.31 MiB/s [2024-12-16T11:39:30.747Z] 9511.33 IOPS, 37.15 MiB/s [2024-12-16T11:39:30.747Z] 9432.25 IOPS, 36.84 MiB/s [2024-12-16T11:39:30.747Z] 9376.80 IOPS, 36.63 MiB/s [2024-12-16T11:39:30.747Z] 9330.00 IOPS, 36.45 MiB/s [2024-12-16T11:39:30.747Z] 9293.43 IOPS, 36.30 MiB/s [2024-12-16T11:39:30.747Z] 9262.88 IOPS, 36.18 MiB/s [2024-12-16T11:39:30.747Z] 9238.22 IOPS, 36.09 MiB/s [2024-12-16T11:39:30.747Z] 9275.70 IOPS, 36.23 MiB/s [2024-12-16T11:39:30.747Z] 9277.00 IOPS, 36.24 MiB/s [2024-12-16T11:39:30.747Z] 9282.83 IOPS, 36.26 MiB/s [2024-12-16T11:39:30.747Z] 9290.77 IOPS, 36.29 MiB/s [2024-12-16T11:39:30.747Z] 9299.00 IOPS, 36.32 MiB/s [2024-12-16T11:39:30.747Z] 9297.80 IOPS, 36.32 MiB/s [2024-12-16T11:39:30.747Z] [2024-12-16 11:39:10.202634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.202796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.202872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.202911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.202947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.202983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.202999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.333 [2024-12-16 11:39:10.203019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.333 [2024-12-16 11:39:10.203052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.334 [2024-12-16 11:39:10.203074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.334 [2024-12-16 11:39:10.203089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:19:25.334 [2024-12-16 11:39:10.203110 - 11:39:10.208555] nvme_qpair.c: *NOTICE*: [command/completion pair repeated ~120 times: READ sqid:1 nsid:1 lba:116896-117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, and WRITE sqid:1 nsid:1 lba:117344-117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:007e-0075 p:0 m:0 dnr:0]
9105.25 IOPS, 35.57 MiB/s [2024-12-16T11:39:30.751Z]
8569.65 IOPS, 33.48 MiB/s [2024-12-16T11:39:30.751Z]
8093.56 IOPS, 31.62 MiB/s [2024-12-16T11:39:30.751Z]
7667.58 IOPS, 29.95 MiB/s [2024-12-16T11:39:30.751Z]
7436.85 IOPS, 29.05 MiB/s [2024-12-16T11:39:30.751Z]
7509.10 IOPS, 29.33 MiB/s [2024-12-16T11:39:30.751Z]
7571.05 IOPS, 29.57 MiB/s [2024-12-16T11:39:30.751Z]
7659.74 IOPS, 29.92 MiB/s [2024-12-16T11:39:30.751Z]
7841.58 IOPS, 30.63 MiB/s [2024-12-16T11:39:30.751Z]
7980.64 IOPS, 31.17 MiB/s [2024-12-16T11:39:30.751Z]
8108.42 IOPS, 31.67 MiB/s [2024-12-16T11:39:30.751Z]
8161.22 IOPS, 31.88 MiB/s [2024-12-16T11:39:30.751Z]
8191.39 IOPS, 32.00 MiB/s [2024-12-16T11:39:30.751Z]
8214.10 IOPS, 32.09 MiB/s [2024-12-16T11:39:30.751Z]
8274.47 IOPS, 32.32 MiB/s [2024-12-16T11:39:30.751Z]
8372.61 IOPS, 32.71 MiB/s [2024-12-16T11:39:30.751Z]
8460.25 IOPS, 33.05 MiB/s [2024-12-16T11:39:30.751Z]
8537.42 IOPS, 33.35 MiB/s [2024-12-16T11:39:30.751Z]
00:19:25.337 [2024-12-16 11:39:27.546132 - 11:39:27.550256] nvme_qpair.c: *NOTICE*: [command/completion pair repeated ~70 times: WRITE sqid:1 nsid:1 lba:26480-26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, and READ sqid:1 nsid:1 lba:25904-26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:001e-0064 p:0 m:0 dnr:0]
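The "(03/02)" pair printed with every completion above is the status this test deliberately provokes: Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible) from the NVMe base specification, i.e. the namespace is temporarily unreachable through this controller's ANA group. Every entry also shows dnr:0 (Do Not Retry clear), so the host is permitted to reissue the same I/O on the other path, which is why the workload recovers instead of failing. A minimal, self-contained sketch of classifying such a status word follows; the bit layout and constants are transcribed from the NVMe base spec for illustration, not taken from SPDK headers:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Status Code Types (NVMe base spec). */
enum { SCT_GENERIC = 0x0, SCT_CMD_SPECIFIC = 0x1, SCT_MEDIA_ERROR = 0x2, SCT_PATH = 0x3 };
/* Status codes under SCT 0x3 (Path Related). */
enum { SC_INTERNAL_PATH_ERROR = 0x00, SC_ANA_PERSISTENT_LOSS = 0x01,
       SC_ANA_INACCESSIBLE = 0x02, SC_ANA_TRANSITION = 0x03 };

/*
 * 16-bit status word from completion dword 3:
 *   bit 0      P   (phase tag,        "p:0" in the log)
 *   bits 8:1   SC  (status code,      the "02" half)
 *   bits 11:9  SCT (status code type, the "03" half)
 *   bits 13:12 CRD (command retry delay)
 *   bit 14     M   (more,             "m:0")
 *   bit 15     DNR (do not retry,     "dnr:0")
 */
static bool is_ana_inaccessible(uint16_t status, bool *host_may_retry)
{
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x07;

    *host_may_retry = ((status >> 15) & 0x1) == 0;  /* DNR clear => retry allowed */
    return sct == SCT_PATH && sc == SC_ANA_INACCESSIBLE;
}

int main(void)
{
    bool retry;
    /* Rebuild the word the log prints as "(03/02) ... p:0 m:0 dnr:0". */
    uint16_t status = (uint16_t)((SCT_PATH << 9) | (SC_ANA_INACCESSIBLE << 1));

    printf("ANA inaccessible: %s, host may retry: %s\n",
           is_ana_inaccessible(status, &retry) ? "yes" : "no",
           retry ? "yes" : "no");
    return 0;
}

The host-side NVMe stack makes essentially this decision when it requeues the failed READ/WRITE commands on the surviving path, which is why throughput dips during the two inaccessible windows but the verify workload still completes.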
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.338 [2024-12-16 11:39:27.549991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.338 [2024-12-16 11:39:27.550022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.338 [2024-12-16 11:39:27.550048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.338 [2024-12-16 11:39:27.550065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.339 [2024-12-16 11:39:27.550087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.339 [2024-12-16 11:39:27.550102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.339 [2024-12-16 11:39:27.550124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.339 [2024-12-16 11:39:27.550140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.339 [2024-12-16 11:39:27.550162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.339 [2024-12-16 11:39:27.550178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.339 [2024-12-16 11:39:27.550201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.339 [2024-12-16 11:39:27.550218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.339 [2024-12-16 11:39:27.550240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.339 [2024-12-16 11:39:27.550256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.339 8537.94 IOPS, 33.35 MiB/s [2024-12-16T11:39:30.753Z] 8542.71 IOPS, 33.37 MiB/s [2024-12-16T11:39:30.753Z] Received shutdown signal, test time was about 35.899692 seconds 00:19:25.339 00:19:25.339 Latency(us) 00:19:25.339 [2024-12-16T11:39:30.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.339 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.339 Verification LBA range: start 0x0 length 0x4000 00:19:25.339 Nvme0n1 : 35.90 8554.39 33.42 0.00 0.00 14931.67 181.53 4026531.84 00:19:25.339 [2024-12-16T11:39:30.753Z] =================================================================================================================== 00:19:25.339 [2024-12-16T11:39:30.753Z] Total : 8554.39 33.42 0.00 0.00 14931.67 181.53 4026531.84 00:19:25.339 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.598 11:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.598 rmmod nvme_tcp 00:19:25.856 rmmod nvme_fabrics 00:19:25.856 rmmod nvme_keyring 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 91649 ']' 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 91649 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91649 ']' 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91649 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91649 00:19:25.856 killing process with pid 91649 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91649' 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91649 00:19:25.856 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91649 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@297 -- # iptr 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.115 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:26.374 00:19:26.374 real 0m41.374s 00:19:26.374 user 2m16.041s 00:19:26.374 sys 0m10.258s 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:26.374 ************************************ 00:19:26.374 END TEST nvmf_host_multipath_status 00:19:26.374 ************************************ 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.374 ************************************ 00:19:26.374 START TEST nvmf_discovery_remove_ifc 00:19:26.374 ************************************ 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.374 * Looking for test storage... 00:19:26.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.374 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.634 --rc genhtml_branch_coverage=1 00:19:26.634 --rc genhtml_function_coverage=1 00:19:26.634 --rc genhtml_legend=1 00:19:26.634 --rc geninfo_all_blocks=1 00:19:26.634 --rc geninfo_unexecuted_blocks=1 00:19:26.634 00:19:26.634 ' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.634 --rc genhtml_branch_coverage=1 00:19:26.634 --rc genhtml_function_coverage=1 00:19:26.634 --rc genhtml_legend=1 00:19:26.634 --rc geninfo_all_blocks=1 00:19:26.634 --rc geninfo_unexecuted_blocks=1 00:19:26.634 00:19:26.634 ' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.634 --rc genhtml_branch_coverage=1 00:19:26.634 --rc genhtml_function_coverage=1 00:19:26.634 --rc genhtml_legend=1 00:19:26.634 --rc geninfo_all_blocks=1 00:19:26.634 --rc geninfo_unexecuted_blocks=1 00:19:26.634 00:19:26.634 ' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.634 --rc genhtml_branch_coverage=1 00:19:26.634 --rc genhtml_function_coverage=1 00:19:26.634 --rc genhtml_legend=1 00:19:26.634 --rc geninfo_all_blocks=1 00:19:26.634 --rc geninfo_unexecuted_blocks=1 00:19:26.634 00:19:26.634 ' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.634 11:39:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.634 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.635 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.635 11:39:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.635 Cannot find device "nvmf_init_br" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.635 Cannot find device "nvmf_init_br2" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.635 Cannot find device "nvmf_tgt_br" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.635 Cannot find device "nvmf_tgt_br2" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.635 Cannot find device "nvmf_init_br" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.635 Cannot find device "nvmf_init_br2" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.635 Cannot find device "nvmf_tgt_br" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.635 Cannot find device "nvmf_tgt_br2" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.635 Cannot find device "nvmf_br" 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.635 Cannot find device "nvmf_init_if" 00:19:26.635 11:39:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:26.635 11:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.635 Cannot find device "nvmf_init_if2" 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.635 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.895 11:39:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:26.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:26.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:26.895 00:19:26.895 --- 10.0.0.3 ping statistics --- 00:19:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.895 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:26.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:26.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:19:26.895 00:19:26.895 --- 10.0.0.4 ping statistics --- 00:19:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.895 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:26.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:26.895 00:19:26.895 --- 10.0.0.1 ping statistics --- 00:19:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.895 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:26.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:26.895 00:19:26.895 --- 10.0.0.2 ping statistics --- 00:19:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.895 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=93109 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 93109 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93109 ']' 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
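The target launch traced above reduces to three steps: start nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, record its pid, and block until the app answers on its default RPC socket. A minimal sketch, assuming the paths and flags shown in this run (the polling loop only approximates what waitforlisten in common/autotest_common.sh does; it is not that function's literal code):

    # Start the target in the namespace; -m 0x2 pins it to core 1, and
    # -e 0xFFFF enables the tracepoint group mask reported in the app output.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done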
00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.895 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.154 [2024-12-16 11:39:32.346426] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:19:27.154 [2024-12-16 11:39:32.346549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.154 [2024-12-16 11:39:32.498701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.413 [2024-12-16 11:39:32.571423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.413 [2024-12-16 11:39:32.571504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.413 [2024-12-16 11:39:32.571518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.413 [2024-12-16 11:39:32.571529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.413 [2024-12-16 11:39:32.571538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.413 [2024-12-16 11:39:32.572035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.413 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.413 [2024-12-16 11:39:32.779120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.413 [2024-12-16 11:39:32.787371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:27.413 null0 00:19:27.413 [2024-12-16 11:39:32.819155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=93145 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 93145 /tmp/host.sock 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93145 ']' 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.672 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.672 11:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 [2024-12-16 11:39:32.908949] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:19:27.672 [2024-12-16 11:39:32.909078] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93145 ] 00:19:27.672 [2024-12-16 11:39:33.060332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.931 [2024-12-16 11:39:33.135050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:27.931 11:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.931 11:39:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 [2024-12-16 11:39:34.342861] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:29.325 [2024-12-16 11:39:34.342927] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:29.325 [2024-12-16 11:39:34.342960] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:29.325 [2024-12-16 11:39:34.429030] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:29.325 [2024-12-16 11:39:34.483698] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:29.325 [2024-12-16 11:39:34.484860] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22461f0:1 started. 00:19:29.325 [2024-12-16 11:39:34.486964] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:29.325 [2024-12-16 11:39:34.487039] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:29.325 [2024-12-16 11:39:34.487070] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:29.325 [2024-12-16 11:39:34.487088] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:29.325 [2024-12-16 11:39:34.487116] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:29.325 [2024-12-16 11:39:34.491127] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22461f0 was disconnected and freed. delete nvme_qpair. 
00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.325 11:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.258 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.258 11:39:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.259 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.517 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.517 11:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.452 11:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.387 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.646 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:32.646 11:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.582 11:39:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:33.582 11:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.517 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.517 [2024-12-16 11:39:39.914406] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:34.517 [2024-12-16 11:39:39.914541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.517 [2024-12-16 11:39:39.914557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.517 [2024-12-16 11:39:39.914571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.517 [2024-12-16 11:39:39.914580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.517 [2024-12-16 11:39:39.914590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.517 [2024-12-16 11:39:39.914599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.517 [2024-12-16 11:39:39.914610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.517 [2024-12-16 11:39:39.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.517 [2024-12-16 11:39:39.914629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.518 [2024-12-16 11:39:39.914638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.518 [2024-12-16 11:39:39.914648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2188290 is same with the state(6) to be set 00:19:34.518 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:34.518 11:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.518 [2024-12-16 11:39:39.924398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188290 (9): Bad file descriptor 00:19:34.775 [2024-12-16 11:39:39.934437] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:34.775 [2024-12-16 11:39:39.934484] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:34.776 [2024-12-16 11:39:39.934491] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:34.776 [2024-12-16 11:39:39.934513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:34.776 [2024-12-16 11:39:39.934560] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.776 11:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.776 [2024-12-16 11:39:40.998067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:35.776 [2024-12-16 11:39:40.998145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188290 with addr=10.0.0.3, port=4420 00:19:35.776 [2024-12-16 11:39:40.998173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188290 is same with the state(6) to be set 00:19:35.776 [2024-12-16 11:39:40.998227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188290 (9): Bad file descriptor 00:19:35.776 [2024-12-16 11:39:40.998771] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:35.776 [2024-12-16 11:39:40.998821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:35.776 [2024-12-16 11:39:40.998839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:35.776 [2024-12-16 11:39:40.998857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:35.776 [2024-12-16 11:39:40.998872] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:19:35.776 [2024-12-16 11:39:40.998905] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:35.776 [2024-12-16 11:39:40.998932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:35.776 [2024-12-16 11:39:40.998960] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:35.776 [2024-12-16 11:39:40.998970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:35.776 11:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.776 11:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:35.776 11:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.712 [2024-12-16 11:39:41.999044] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:36.712 [2024-12-16 11:39:41.999371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:36.712 [2024-12-16 11:39:41.999435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:36.712 [2024-12-16 11:39:41.999448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:36.712 [2024-12-16 11:39:41.999458] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:36.712 [2024-12-16 11:39:41.999469] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:36.712 [2024-12-16 11:39:41.999476] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:36.712 [2024-12-16 11:39:41.999481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
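The one-second cadence above (get_bdev_list, the 0 == 0 status check, the nvme0n1 != '' comparison, sleep 1) is a single poll iteration repeated until the bdev disappears. A minimal sketch of that helper pair, reconstructed from the xtrace rather than copied verbatim from host/discovery_remove_ifc.sh (rpc_cmd is the autotest RPC wrapper seen throughout the trace):

    # List bdev names over the host-side RPC socket, normalized to one
    # sorted, space-separated line so a plain string compare is stable.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the list matches the expected value:
    # '' while the path is torn down, nvme1n1 once rediscovery re-attaches.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }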
00:19:36.712 [2024-12-16 11:39:41.999517] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:36.712 [2024-12-16 11:39:41.999569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.712 [2024-12-16 11:39:41.999585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-12-16 11:39:41.999599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.712 [2024-12-16 11:39:41.999609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-12-16 11:39:41.999619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.712 [2024-12-16 11:39:41.999628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-12-16 11:39:41.999638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.712 [2024-12-16 11:39:41.999648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-12-16 11:39:41.999658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.712 [2024-12-16 11:39:41.999667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.712 [2024-12-16 11:39:41.999677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
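The frames above are the bdev_nvme reconnect machinery backing off after connect() hit errno 110 (ETIMEDOUT) against the deleted 10.0.0.3, with the discovery controller failing the same way and dropping its entry for cnode0. To watch that state directly instead of inferring it from the bdev list, one hedged option (assuming the stock bdev_nvme_get_controllers RPC and its ctrlrs[].state field, which this trace never invokes itself) would be:

    # Dump controller state over the same host socket the test drives;
    # expect failed/reconnecting-style states while the address is gone.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].ctrlrs[].state'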
00:19:36.712 [2024-12-16 11:39:41.999702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b4820 (9): Bad file descriptor 00:19:36.712 [2024-12-16 11:39:42.000383] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:36.712 [2024-12-16 11:39:42.000401] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.712 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.713 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.971 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:36.971 11:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.907 11:39:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:37.907 11:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:38.842 [2024-12-16 11:39:44.004464] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:38.842 [2024-12-16 11:39:44.004516] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:38.842 [2024-12-16 11:39:44.004537] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:38.842 [2024-12-16 11:39:44.090605] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:38.842 [2024-12-16 11:39:44.144961] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:38.842 [2024-12-16 11:39:44.145558] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x22008f0:1 started. 00:19:38.842 [2024-12-16 11:39:44.146913] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:38.842 [2024-12-16 11:39:44.146970] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:38.842 [2024-12-16 11:39:44.146996] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:38.842 [2024-12-16 11:39:44.147023] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:38.842 [2024-12-16 11:39:44.147034] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:38.842 [2024-12-16 11:39:44.153025] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x22008f0 was disconnected and freed. delete nvme_qpair. 
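With the address back, discovery re-attached the subsystem as nvme1 and freed the stale qpair. The recovery half of the test (@82/@83/@86 in the trace) is the mirror image of the @75/@76/@79 teardown; condensed from the commands already logged above:

    # Restore the target address inside its namespace, bring the veth
    # back up, then wait for rediscovery to surface the new controller.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1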
00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.842 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93145 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93145 ']' 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93145 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93145 00:19:39.101 killing process with pid 93145 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93145' 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93145 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93145 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.101 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.359 rmmod nvme_tcp 00:19:39.359 rmmod nvme_fabrics 00:19:39.359 rmmod nvme_keyring 00:19:39.359 11:39:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 93109 ']' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 93109 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93109 ']' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93109 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93109 00:19:39.359 killing process with pid 93109 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93109' 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93109 00:19:39.359 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93109 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:39.618 11:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:39.618 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:39.618 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:39.877 00:19:39.877 real 0m13.465s 00:19:39.877 user 0m23.639s 00:19:39.877 sys 0m1.758s 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.877 ************************************ 00:19:39.877 END TEST nvmf_discovery_remove_ifc 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.877 ************************************ 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.877 ************************************ 00:19:39.877 START TEST nvmf_identify_kernel_target 00:19:39.877 ************************************ 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:39.877 * Looking for test storage... 
00:19:39.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:39.877 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.136 --rc genhtml_branch_coverage=1 00:19:40.136 --rc genhtml_function_coverage=1 00:19:40.136 --rc genhtml_legend=1 00:19:40.136 --rc geninfo_all_blocks=1 00:19:40.136 --rc geninfo_unexecuted_blocks=1 00:19:40.136 00:19:40.136 ' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.136 --rc genhtml_branch_coverage=1 00:19:40.136 --rc genhtml_function_coverage=1 00:19:40.136 --rc genhtml_legend=1 00:19:40.136 --rc geninfo_all_blocks=1 00:19:40.136 --rc geninfo_unexecuted_blocks=1 00:19:40.136 00:19:40.136 ' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.136 --rc genhtml_branch_coverage=1 00:19:40.136 --rc genhtml_function_coverage=1 00:19:40.136 --rc genhtml_legend=1 00:19:40.136 --rc geninfo_all_blocks=1 00:19:40.136 --rc geninfo_unexecuted_blocks=1 00:19:40.136 00:19:40.136 ' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.136 --rc genhtml_branch_coverage=1 00:19:40.136 --rc genhtml_function_coverage=1 00:19:40.136 --rc genhtml_legend=1 00:19:40.136 --rc geninfo_all_blocks=1 00:19:40.136 --rc geninfo_unexecuted_blocks=1 00:19:40.136 00:19:40.136 ' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
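The @333-@368 walk above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x so it can pick backward-compatible LCOV_OPTS. A condensed sketch of that comparison, reconstructed from the xtrace rather than quoted from the script:

    # Compare two dotted versions field by field; e.g. cmp_versions 1.15 '<' 2
    # succeeds, so lt 1.15 2 is true and the legacy lcov flags are selected.
    cmp_versions() {
        local op=$2 v d1 d2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == *'>'* ]]; return; }
            ((d1 < d2)) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }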
00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:40.136 11:39:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:40.136 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.137 11:39:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:40.137 Cannot find device "nvmf_init_br" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:40.137 Cannot find device "nvmf_init_br2" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:40.137 Cannot find device "nvmf_tgt_br" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.137 Cannot find device "nvmf_tgt_br2" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:40.137 Cannot find device "nvmf_init_br" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:40.137 Cannot find device "nvmf_init_br2" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:40.137 Cannot find device "nvmf_tgt_br" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:40.137 Cannot find device "nvmf_tgt_br2" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:40.137 Cannot find device "nvmf_br" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:40.137 Cannot find device "nvmf_init_if" 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:40.137 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:40.137 Cannot find device "nvmf_init_if2" 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.395 11:39:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:40.395 11:39:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:40.395 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.395 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:40.395 00:19:40.395 --- 10.0.0.3 ping statistics --- 00:19:40.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.395 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:40.395 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:40.395 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:40.395 00:19:40.395 --- 10.0.0.4 ping statistics --- 00:19:40.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.395 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:40.395 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:40.654 00:19:40.654 --- 10.0.0.1 ping statistics --- 00:19:40.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.654 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:40.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:19:40.654 00:19:40.654 --- 10.0.0.2 ping statistics --- 00:19:40.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.654 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:40.654 11:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:40.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.913 Waiting for block devices as requested 00:19:40.913 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.171 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:41.171 No valid GPT data, bailing 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:41.171 11:39:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:41.171 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:41.171 No valid GPT data, bailing 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:41.430 No valid GPT data, bailing 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:41.430 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:41.430 No valid GPT data, bailing 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.1 -t tcp -s 4420 00:19:41.431 00:19:41.431 Discovery Log Number of Records 2, Generation counter 2 00:19:41.431 =====Discovery Log Entry 0====== 00:19:41.431 trtype: tcp 00:19:41.431 adrfam: ipv4 00:19:41.431 subtype: current discovery subsystem 00:19:41.431 treq: not specified, sq flow control disable supported 00:19:41.431 portid: 1 00:19:41.431 trsvcid: 4420 00:19:41.431 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:41.431 traddr: 10.0.0.1 00:19:41.431 eflags: none 00:19:41.431 sectype: none 00:19:41.431 =====Discovery Log Entry 1====== 00:19:41.431 trtype: tcp 00:19:41.431 adrfam: ipv4 00:19:41.431 subtype: nvme subsystem 00:19:41.431 treq: not 
specified, sq flow control disable supported 00:19:41.431 portid: 1 00:19:41.431 trsvcid: 4420 00:19:41.431 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:41.431 traddr: 10.0.0.1 00:19:41.431 eflags: none 00:19:41.431 sectype: none 00:19:41.431 11:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:41.431 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:41.690 ===================================================== 00:19:41.690 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:41.690 ===================================================== 00:19:41.690 Controller Capabilities/Features 00:19:41.690 ================================ 00:19:41.690 Vendor ID: 0000 00:19:41.690 Subsystem Vendor ID: 0000 00:19:41.690 Serial Number: 4377fcfe928eef922981 00:19:41.690 Model Number: Linux 00:19:41.690 Firmware Version: 6.8.9-20 00:19:41.690 Recommended Arb Burst: 0 00:19:41.690 IEEE OUI Identifier: 00 00 00 00:19:41.690 Multi-path I/O 00:19:41.690 May have multiple subsystem ports: No 00:19:41.690 May have multiple controllers: No 00:19:41.690 Associated with SR-IOV VF: No 00:19:41.690 Max Data Transfer Size: Unlimited 00:19:41.690 Max Number of Namespaces: 0 00:19:41.690 Max Number of I/O Queues: 1024 00:19:41.690 NVMe Specification Version (VS): 1.3 00:19:41.690 NVMe Specification Version (Identify): 1.3 00:19:41.690 Maximum Queue Entries: 1024 00:19:41.690 Contiguous Queues Required: No 00:19:41.690 Arbitration Mechanisms Supported 00:19:41.690 Weighted Round Robin: Not Supported 00:19:41.690 Vendor Specific: Not Supported 00:19:41.690 Reset Timeout: 7500 ms 00:19:41.690 Doorbell Stride: 4 bytes 00:19:41.690 NVM Subsystem Reset: Not Supported 00:19:41.690 Command Sets Supported 00:19:41.690 NVM Command Set: Supported 00:19:41.690 Boot Partition: Not Supported 00:19:41.690 Memory Page Size Minimum: 4096 bytes 00:19:41.690 Memory Page Size Maximum: 4096 bytes 00:19:41.690 Persistent Memory Region: Not Supported 00:19:41.690 Optional Asynchronous Events Supported 00:19:41.690 Namespace Attribute Notices: Not Supported 00:19:41.690 Firmware Activation Notices: Not Supported 00:19:41.690 ANA Change Notices: Not Supported 00:19:41.690 PLE Aggregate Log Change Notices: Not Supported 00:19:41.690 LBA Status Info Alert Notices: Not Supported 00:19:41.690 EGE Aggregate Log Change Notices: Not Supported 00:19:41.690 Normal NVM Subsystem Shutdown event: Not Supported 00:19:41.690 Zone Descriptor Change Notices: Not Supported 00:19:41.690 Discovery Log Change Notices: Supported 00:19:41.690 Controller Attributes 00:19:41.690 128-bit Host Identifier: Not Supported 00:19:41.690 Non-Operational Permissive Mode: Not Supported 00:19:41.690 NVM Sets: Not Supported 00:19:41.690 Read Recovery Levels: Not Supported 00:19:41.690 Endurance Groups: Not Supported 00:19:41.690 Predictable Latency Mode: Not Supported 00:19:41.690 Traffic Based Keep ALive: Not Supported 00:19:41.690 Namespace Granularity: Not Supported 00:19:41.690 SQ Associations: Not Supported 00:19:41.690 UUID List: Not Supported 00:19:41.690 Multi-Domain Subsystem: Not Supported 00:19:41.690 Fixed Capacity Management: Not Supported 00:19:41.690 Variable Capacity Management: Not Supported 00:19:41.690 Delete Endurance Group: Not Supported 00:19:41.690 Delete NVM Set: Not Supported 00:19:41.690 Extended LBA Formats Supported: Not Supported 00:19:41.690 Flexible Data 
Placement Supported: Not Supported 00:19:41.690 00:19:41.690 Controller Memory Buffer Support 00:19:41.690 ================================ 00:19:41.690 Supported: No 00:19:41.690 00:19:41.690 Persistent Memory Region Support 00:19:41.690 ================================ 00:19:41.691 Supported: No 00:19:41.691 00:19:41.691 Admin Command Set Attributes 00:19:41.691 ============================ 00:19:41.691 Security Send/Receive: Not Supported 00:19:41.691 Format NVM: Not Supported 00:19:41.691 Firmware Activate/Download: Not Supported 00:19:41.691 Namespace Management: Not Supported 00:19:41.691 Device Self-Test: Not Supported 00:19:41.691 Directives: Not Supported 00:19:41.691 NVMe-MI: Not Supported 00:19:41.691 Virtualization Management: Not Supported 00:19:41.691 Doorbell Buffer Config: Not Supported 00:19:41.691 Get LBA Status Capability: Not Supported 00:19:41.691 Command & Feature Lockdown Capability: Not Supported 00:19:41.691 Abort Command Limit: 1 00:19:41.691 Async Event Request Limit: 1 00:19:41.691 Number of Firmware Slots: N/A 00:19:41.691 Firmware Slot 1 Read-Only: N/A 00:19:41.691 Firmware Activation Without Reset: N/A 00:19:41.691 Multiple Update Detection Support: N/A 00:19:41.691 Firmware Update Granularity: No Information Provided 00:19:41.691 Per-Namespace SMART Log: No 00:19:41.691 Asymmetric Namespace Access Log Page: Not Supported 00:19:41.691 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:41.691 Command Effects Log Page: Not Supported 00:19:41.691 Get Log Page Extended Data: Supported 00:19:41.691 Telemetry Log Pages: Not Supported 00:19:41.691 Persistent Event Log Pages: Not Supported 00:19:41.691 Supported Log Pages Log Page: May Support 00:19:41.691 Commands Supported & Effects Log Page: Not Supported 00:19:41.691 Feature Identifiers & Effects Log Page:May Support 00:19:41.691 NVMe-MI Commands & Effects Log Page: May Support 00:19:41.691 Data Area 4 for Telemetry Log: Not Supported 00:19:41.691 Error Log Page Entries Supported: 1 00:19:41.691 Keep Alive: Not Supported 00:19:41.691 00:19:41.691 NVM Command Set Attributes 00:19:41.691 ========================== 00:19:41.691 Submission Queue Entry Size 00:19:41.691 Max: 1 00:19:41.691 Min: 1 00:19:41.691 Completion Queue Entry Size 00:19:41.691 Max: 1 00:19:41.691 Min: 1 00:19:41.691 Number of Namespaces: 0 00:19:41.691 Compare Command: Not Supported 00:19:41.691 Write Uncorrectable Command: Not Supported 00:19:41.691 Dataset Management Command: Not Supported 00:19:41.691 Write Zeroes Command: Not Supported 00:19:41.691 Set Features Save Field: Not Supported 00:19:41.691 Reservations: Not Supported 00:19:41.691 Timestamp: Not Supported 00:19:41.691 Copy: Not Supported 00:19:41.691 Volatile Write Cache: Not Present 00:19:41.691 Atomic Write Unit (Normal): 1 00:19:41.691 Atomic Write Unit (PFail): 1 00:19:41.691 Atomic Compare & Write Unit: 1 00:19:41.691 Fused Compare & Write: Not Supported 00:19:41.691 Scatter-Gather List 00:19:41.691 SGL Command Set: Supported 00:19:41.691 SGL Keyed: Not Supported 00:19:41.691 SGL Bit Bucket Descriptor: Not Supported 00:19:41.691 SGL Metadata Pointer: Not Supported 00:19:41.691 Oversized SGL: Not Supported 00:19:41.691 SGL Metadata Address: Not Supported 00:19:41.691 SGL Offset: Supported 00:19:41.691 Transport SGL Data Block: Not Supported 00:19:41.691 Replay Protected Memory Block: Not Supported 00:19:41.691 00:19:41.691 Firmware Slot Information 00:19:41.691 ========================= 00:19:41.691 Active slot: 0 00:19:41.691 00:19:41.691 00:19:41.691 Error Log 
00:19:41.691 ========= 00:19:41.691 00:19:41.691 Active Namespaces 00:19:41.691 ================= 00:19:41.691 Discovery Log Page 00:19:41.691 ================== 00:19:41.691 Generation Counter: 2 00:19:41.691 Number of Records: 2 00:19:41.691 Record Format: 0 00:19:41.691 00:19:41.691 Discovery Log Entry 0 00:19:41.691 ---------------------- 00:19:41.691 Transport Type: 3 (TCP) 00:19:41.691 Address Family: 1 (IPv4) 00:19:41.691 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:41.691 Entry Flags: 00:19:41.691 Duplicate Returned Information: 0 00:19:41.691 Explicit Persistent Connection Support for Discovery: 0 00:19:41.691 Transport Requirements: 00:19:41.691 Secure Channel: Not Specified 00:19:41.691 Port ID: 1 (0x0001) 00:19:41.691 Controller ID: 65535 (0xffff) 00:19:41.691 Admin Max SQ Size: 32 00:19:41.691 Transport Service Identifier: 4420 00:19:41.691 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:41.691 Transport Address: 10.0.0.1 00:19:41.691 Discovery Log Entry 1 00:19:41.691 ---------------------- 00:19:41.691 Transport Type: 3 (TCP) 00:19:41.691 Address Family: 1 (IPv4) 00:19:41.691 Subsystem Type: 2 (NVM Subsystem) 00:19:41.691 Entry Flags: 00:19:41.691 Duplicate Returned Information: 0 00:19:41.691 Explicit Persistent Connection Support for Discovery: 0 00:19:41.691 Transport Requirements: 00:19:41.691 Secure Channel: Not Specified 00:19:41.691 Port ID: 1 (0x0001) 00:19:41.691 Controller ID: 65535 (0xffff) 00:19:41.691 Admin Max SQ Size: 32 00:19:41.691 Transport Service Identifier: 4420 00:19:41.691 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:41.691 Transport Address: 10.0.0.1 00:19:41.691 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:41.950 get_feature(0x01) failed 00:19:41.950 get_feature(0x02) failed 00:19:41.950 get_feature(0x04) failed 00:19:41.950 ===================================================== 00:19:41.950 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:41.950 ===================================================== 00:19:41.950 Controller Capabilities/Features 00:19:41.950 ================================ 00:19:41.950 Vendor ID: 0000 00:19:41.950 Subsystem Vendor ID: 0000 00:19:41.950 Serial Number: a4e5de96816608875e1e 00:19:41.950 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:41.950 Firmware Version: 6.8.9-20 00:19:41.950 Recommended Arb Burst: 6 00:19:41.950 IEEE OUI Identifier: 00 00 00 00:19:41.950 Multi-path I/O 00:19:41.950 May have multiple subsystem ports: Yes 00:19:41.950 May have multiple controllers: Yes 00:19:41.950 Associated with SR-IOV VF: No 00:19:41.950 Max Data Transfer Size: Unlimited 00:19:41.950 Max Number of Namespaces: 1024 00:19:41.950 Max Number of I/O Queues: 128 00:19:41.950 NVMe Specification Version (VS): 1.3 00:19:41.950 NVMe Specification Version (Identify): 1.3 00:19:41.950 Maximum Queue Entries: 1024 00:19:41.950 Contiguous Queues Required: No 00:19:41.950 Arbitration Mechanisms Supported 00:19:41.950 Weighted Round Robin: Not Supported 00:19:41.950 Vendor Specific: Not Supported 00:19:41.950 Reset Timeout: 7500 ms 00:19:41.950 Doorbell Stride: 4 bytes 00:19:41.950 NVM Subsystem Reset: Not Supported 00:19:41.950 Command Sets Supported 00:19:41.950 NVM Command Set: Supported 00:19:41.950 Boot Partition: Not Supported 00:19:41.950 Memory 
Page Size Minimum: 4096 bytes 00:19:41.950 Memory Page Size Maximum: 4096 bytes 00:19:41.950 Persistent Memory Region: Not Supported 00:19:41.950 Optional Asynchronous Events Supported 00:19:41.950 Namespace Attribute Notices: Supported 00:19:41.950 Firmware Activation Notices: Not Supported 00:19:41.950 ANA Change Notices: Supported 00:19:41.950 PLE Aggregate Log Change Notices: Not Supported 00:19:41.950 LBA Status Info Alert Notices: Not Supported 00:19:41.950 EGE Aggregate Log Change Notices: Not Supported 00:19:41.950 Normal NVM Subsystem Shutdown event: Not Supported 00:19:41.950 Zone Descriptor Change Notices: Not Supported 00:19:41.950 Discovery Log Change Notices: Not Supported 00:19:41.950 Controller Attributes 00:19:41.950 128-bit Host Identifier: Supported 00:19:41.950 Non-Operational Permissive Mode: Not Supported 00:19:41.950 NVM Sets: Not Supported 00:19:41.950 Read Recovery Levels: Not Supported 00:19:41.950 Endurance Groups: Not Supported 00:19:41.950 Predictable Latency Mode: Not Supported 00:19:41.950 Traffic Based Keep ALive: Supported 00:19:41.950 Namespace Granularity: Not Supported 00:19:41.950 SQ Associations: Not Supported 00:19:41.950 UUID List: Not Supported 00:19:41.950 Multi-Domain Subsystem: Not Supported 00:19:41.950 Fixed Capacity Management: Not Supported 00:19:41.950 Variable Capacity Management: Not Supported 00:19:41.950 Delete Endurance Group: Not Supported 00:19:41.950 Delete NVM Set: Not Supported 00:19:41.950 Extended LBA Formats Supported: Not Supported 00:19:41.950 Flexible Data Placement Supported: Not Supported 00:19:41.950 00:19:41.950 Controller Memory Buffer Support 00:19:41.950 ================================ 00:19:41.950 Supported: No 00:19:41.950 00:19:41.950 Persistent Memory Region Support 00:19:41.950 ================================ 00:19:41.950 Supported: No 00:19:41.950 00:19:41.950 Admin Command Set Attributes 00:19:41.950 ============================ 00:19:41.950 Security Send/Receive: Not Supported 00:19:41.950 Format NVM: Not Supported 00:19:41.950 Firmware Activate/Download: Not Supported 00:19:41.950 Namespace Management: Not Supported 00:19:41.950 Device Self-Test: Not Supported 00:19:41.950 Directives: Not Supported 00:19:41.950 NVMe-MI: Not Supported 00:19:41.950 Virtualization Management: Not Supported 00:19:41.950 Doorbell Buffer Config: Not Supported 00:19:41.950 Get LBA Status Capability: Not Supported 00:19:41.950 Command & Feature Lockdown Capability: Not Supported 00:19:41.950 Abort Command Limit: 4 00:19:41.950 Async Event Request Limit: 4 00:19:41.950 Number of Firmware Slots: N/A 00:19:41.950 Firmware Slot 1 Read-Only: N/A 00:19:41.950 Firmware Activation Without Reset: N/A 00:19:41.950 Multiple Update Detection Support: N/A 00:19:41.950 Firmware Update Granularity: No Information Provided 00:19:41.950 Per-Namespace SMART Log: Yes 00:19:41.950 Asymmetric Namespace Access Log Page: Supported 00:19:41.950 ANA Transition Time : 10 sec 00:19:41.950 00:19:41.950 Asymmetric Namespace Access Capabilities 00:19:41.950 ANA Optimized State : Supported 00:19:41.950 ANA Non-Optimized State : Supported 00:19:41.950 ANA Inaccessible State : Supported 00:19:41.950 ANA Persistent Loss State : Supported 00:19:41.950 ANA Change State : Supported 00:19:41.950 ANAGRPID is not changed : No 00:19:41.950 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:41.950 00:19:41.950 ANA Group Identifier Maximum : 128 00:19:41.950 Number of ANA Group Identifiers : 128 00:19:41.950 Max Number of Allowed Namespaces : 1024 00:19:41.950 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:41.950 Command Effects Log Page: Supported 00:19:41.950 Get Log Page Extended Data: Supported 00:19:41.950 Telemetry Log Pages: Not Supported 00:19:41.950 Persistent Event Log Pages: Not Supported 00:19:41.950 Supported Log Pages Log Page: May Support 00:19:41.950 Commands Supported & Effects Log Page: Not Supported 00:19:41.950 Feature Identifiers & Effects Log Page:May Support 00:19:41.950 NVMe-MI Commands & Effects Log Page: May Support 00:19:41.950 Data Area 4 for Telemetry Log: Not Supported 00:19:41.950 Error Log Page Entries Supported: 128 00:19:41.950 Keep Alive: Supported 00:19:41.950 Keep Alive Granularity: 1000 ms 00:19:41.950 00:19:41.950 NVM Command Set Attributes 00:19:41.950 ========================== 00:19:41.950 Submission Queue Entry Size 00:19:41.950 Max: 64 00:19:41.950 Min: 64 00:19:41.950 Completion Queue Entry Size 00:19:41.950 Max: 16 00:19:41.950 Min: 16 00:19:41.950 Number of Namespaces: 1024 00:19:41.950 Compare Command: Not Supported 00:19:41.950 Write Uncorrectable Command: Not Supported 00:19:41.950 Dataset Management Command: Supported 00:19:41.950 Write Zeroes Command: Supported 00:19:41.950 Set Features Save Field: Not Supported 00:19:41.950 Reservations: Not Supported 00:19:41.950 Timestamp: Not Supported 00:19:41.950 Copy: Not Supported 00:19:41.950 Volatile Write Cache: Present 00:19:41.950 Atomic Write Unit (Normal): 1 00:19:41.950 Atomic Write Unit (PFail): 1 00:19:41.950 Atomic Compare & Write Unit: 1 00:19:41.950 Fused Compare & Write: Not Supported 00:19:41.950 Scatter-Gather List 00:19:41.950 SGL Command Set: Supported 00:19:41.950 SGL Keyed: Not Supported 00:19:41.950 SGL Bit Bucket Descriptor: Not Supported 00:19:41.950 SGL Metadata Pointer: Not Supported 00:19:41.950 Oversized SGL: Not Supported 00:19:41.950 SGL Metadata Address: Not Supported 00:19:41.950 SGL Offset: Supported 00:19:41.951 Transport SGL Data Block: Not Supported 00:19:41.951 Replay Protected Memory Block: Not Supported 00:19:41.951 00:19:41.951 Firmware Slot Information 00:19:41.951 ========================= 00:19:41.951 Active slot: 0 00:19:41.951 00:19:41.951 Asymmetric Namespace Access 00:19:41.951 =========================== 00:19:41.951 Change Count : 0 00:19:41.951 Number of ANA Group Descriptors : 1 00:19:41.951 ANA Group Descriptor : 0 00:19:41.951 ANA Group ID : 1 00:19:41.951 Number of NSID Values : 1 00:19:41.951 Change Count : 0 00:19:41.951 ANA State : 1 00:19:41.951 Namespace Identifier : 1 00:19:41.951 00:19:41.951 Commands Supported and Effects 00:19:41.951 ============================== 00:19:41.951 Admin Commands 00:19:41.951 -------------- 00:19:41.951 Get Log Page (02h): Supported 00:19:41.951 Identify (06h): Supported 00:19:41.951 Abort (08h): Supported 00:19:41.951 Set Features (09h): Supported 00:19:41.951 Get Features (0Ah): Supported 00:19:41.951 Asynchronous Event Request (0Ch): Supported 00:19:41.951 Keep Alive (18h): Supported 00:19:41.951 I/O Commands 00:19:41.951 ------------ 00:19:41.951 Flush (00h): Supported 00:19:41.951 Write (01h): Supported LBA-Change 00:19:41.951 Read (02h): Supported 00:19:41.951 Write Zeroes (08h): Supported LBA-Change 00:19:41.951 Dataset Management (09h): Supported 00:19:41.951 00:19:41.951 Error Log 00:19:41.951 ========= 00:19:41.951 Entry: 0 00:19:41.951 Error Count: 0x3 00:19:41.951 Submission Queue Id: 0x0 00:19:41.951 Command Id: 0x5 00:19:41.951 Phase Bit: 0 00:19:41.951 Status Code: 0x2 00:19:41.951 Status Code Type: 0x0 00:19:41.951 Do Not Retry: 1 00:19:41.951 Error 
Location: 0x28 00:19:41.951 LBA: 0x0 00:19:41.951 Namespace: 0x0 00:19:41.951 Vendor Log Page: 0x0 00:19:41.951 ----------- 00:19:41.951 Entry: 1 00:19:41.951 Error Count: 0x2 00:19:41.951 Submission Queue Id: 0x0 00:19:41.951 Command Id: 0x5 00:19:41.951 Phase Bit: 0 00:19:41.951 Status Code: 0x2 00:19:41.951 Status Code Type: 0x0 00:19:41.951 Do Not Retry: 1 00:19:41.951 Error Location: 0x28 00:19:41.951 LBA: 0x0 00:19:41.951 Namespace: 0x0 00:19:41.951 Vendor Log Page: 0x0 00:19:41.951 ----------- 00:19:41.951 Entry: 2 00:19:41.951 Error Count: 0x1 00:19:41.951 Submission Queue Id: 0x0 00:19:41.951 Command Id: 0x4 00:19:41.951 Phase Bit: 0 00:19:41.951 Status Code: 0x2 00:19:41.951 Status Code Type: 0x0 00:19:41.951 Do Not Retry: 1 00:19:41.951 Error Location: 0x28 00:19:41.951 LBA: 0x0 00:19:41.951 Namespace: 0x0 00:19:41.951 Vendor Log Page: 0x0 00:19:41.951 00:19:41.951 Number of Queues 00:19:41.951 ================ 00:19:41.951 Number of I/O Submission Queues: 128 00:19:41.951 Number of I/O Completion Queues: 128 00:19:41.951 00:19:41.951 ZNS Specific Controller Data 00:19:41.951 ============================ 00:19:41.951 Zone Append Size Limit: 0 00:19:41.951 00:19:41.951 00:19:41.951 Active Namespaces 00:19:41.951 ================= 00:19:41.951 get_feature(0x05) failed 00:19:41.951 Namespace ID:1 00:19:41.951 Command Set Identifier: NVM (00h) 00:19:41.951 Deallocate: Supported 00:19:41.951 Deallocated/Unwritten Error: Not Supported 00:19:41.951 Deallocated Read Value: Unknown 00:19:41.951 Deallocate in Write Zeroes: Not Supported 00:19:41.951 Deallocated Guard Field: 0xFFFF 00:19:41.951 Flush: Supported 00:19:41.951 Reservation: Not Supported 00:19:41.951 Namespace Sharing Capabilities: Multiple Controllers 00:19:41.951 Size (in LBAs): 1310720 (5GiB) 00:19:41.951 Capacity (in LBAs): 1310720 (5GiB) 00:19:41.951 Utilization (in LBAs): 1310720 (5GiB) 00:19:41.951 UUID: 5f1f8362-99d3-4aec-9c00-f6deeeac8024 00:19:41.951 Thin Provisioning: Not Supported 00:19:41.951 Per-NS Atomic Units: Yes 00:19:41.951 Atomic Boundary Size (Normal): 0 00:19:41.951 Atomic Boundary Size (PFail): 0 00:19:41.951 Atomic Boundary Offset: 0 00:19:41.951 NGUID/EUI64 Never Reused: No 00:19:41.951 ANA group ID: 1 00:19:41.951 Namespace Write Protected: No 00:19:41.951 Number of LBA Formats: 1 00:19:41.951 Current LBA Format: LBA Format #00 00:19:41.951 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:41.951 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.951 rmmod nvme_tcp 00:19:41.951 rmmod nvme_fabrics 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:41.951 11:39:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:41.951 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:42.210 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:42.468 11:39:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:43.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:43.035 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:43.294 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:43.294 ************************************ 00:19:43.294 END TEST nvmf_identify_kernel_target 00:19:43.294 ************************************ 00:19:43.294 00:19:43.294 real 0m3.381s 00:19:43.294 user 0m1.209s 00:19:43.294 sys 0m1.467s 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.294 ************************************ 00:19:43.294 START TEST nvmf_auth_host 00:19:43.294 ************************************ 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:43.294 * Looking for test storage... 
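The identify_kernel_target test that just finished drives a Linux kernel NVMe-oF target entirely through configfs: the probe loop earlier ("No valid GPT data, bailing") picks the last non-zoned, unpartitioned namespace (/dev/nvme1n1 here) as backing storage, and configure_kernel_target then creates the subsystem, namespace, and TCP port. Condensed from the trace, roughly the following; xtrace does not show redirection targets, so the attribute file names below are the standard nvmet configfs ones, reconstructed rather than read from the log (needs root, nvmet/nvmet-tcp loadable):

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces as Model Number in the identify output above
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp  > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # linking the subsystem into the port activates the listener

clean_kernel_target, traced just above, undoes this in reverse: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.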
00:19:43.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:43.294 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:43.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.553 --rc genhtml_branch_coverage=1 00:19:43.553 --rc genhtml_function_coverage=1 00:19:43.553 --rc genhtml_legend=1 00:19:43.553 --rc geninfo_all_blocks=1 00:19:43.553 --rc geninfo_unexecuted_blocks=1 00:19:43.553 00:19:43.553 ' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:43.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.553 --rc genhtml_branch_coverage=1 00:19:43.553 --rc genhtml_function_coverage=1 00:19:43.553 --rc genhtml_legend=1 00:19:43.553 --rc geninfo_all_blocks=1 00:19:43.553 --rc geninfo_unexecuted_blocks=1 00:19:43.553 00:19:43.553 ' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:43.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.553 --rc genhtml_branch_coverage=1 00:19:43.553 --rc genhtml_function_coverage=1 00:19:43.553 --rc genhtml_legend=1 00:19:43.553 --rc geninfo_all_blocks=1 00:19:43.553 --rc geninfo_unexecuted_blocks=1 00:19:43.553 00:19:43.553 ' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:43.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.553 --rc genhtml_branch_coverage=1 00:19:43.553 --rc genhtml_function_coverage=1 00:19:43.553 --rc genhtml_legend=1 00:19:43.553 --rc geninfo_all_blocks=1 00:19:43.553 --rc geninfo_unexecuted_blocks=1 00:19:43.553 00:19:43.553 ' 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.553 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.554 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:43.554 Cannot find device "nvmf_init_br" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:43.554 Cannot find device "nvmf_init_br2" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:43.554 Cannot find device "nvmf_tgt_br" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.554 Cannot find device "nvmf_tgt_br2" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:43.554 Cannot find device "nvmf_init_br" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.554 Cannot find device "nvmf_init_br2" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.554 Cannot find device "nvmf_tgt_br" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.554 Cannot find device "nvmf_tgt_br2" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.554 Cannot find device "nvmf_br" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:43.554 Cannot find device "nvmf_init_if" 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:43.554 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.813 Cannot find device "nvmf_init_if2" 00:19:43.813 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:43.813 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.813 11:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:43.813 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.814 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:43.814 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.814 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.814 11:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
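
For reference, the plumbing that nvmf_veth_init assembles in the xtrace above reduces to the sketch below: two initiator-side veth pairs and two target-side veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and all four bridge-side peers enslaved to a single bridge. This is a condensed recap reconstructed from the log, not the harness itself (the individual `ip link set ... up` calls are omitted):

    # namespace for the target side; the initiator stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the traffic endpoint, *_br is the end that joins the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator side gets 10.0.0.1/.2, the namespace side gets 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # one bridge ties the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The "Cannot find device" errors earlier are expected: teardown of a previous run's topology is attempted unconditionally before the fresh setup.
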
00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:43.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:43.814 00:19:43.814 --- 10.0.0.3 ping statistics --- 00:19:43.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.814 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:43.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:43.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:19:43.814 00:19:43.814 --- 10.0.0.4 ping statistics --- 00:19:43.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.814 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:43.814 00:19:43.814 --- 10.0.0.1 ping statistics --- 00:19:43.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.814 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:43.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:43.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:43.814 00:19:43.814 --- 10.0.0.2 ping statistics --- 00:19:43.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.814 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=94137 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 94137 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94137 ']' 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
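
The ipts wrapper seen above tags every iptables rule it inserts with an "SPDK_NVMF:" comment recording the original rule spec, so teardown can later delete exactly the rules this test added. With TCP/4420 open and the four pings confirming reachability across the bridge in both directions, the SPDK app is launched; note NVMF_APP is prepended with NVMF_TARGET_NS_CMD, so nvmf_tgt (acting as the NVMe-oF *host* in this nvmf_auth_host test) runs inside the namespace. Roughly:

    # open the NVMe/TCP port toward both initiator-side interfaces
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # launch the SPDK app inside the namespace with nvme_auth debug logging;
    # the harness backgrounds this and polls /var/tmp/spdk.sock (waitforlisten)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
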
00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.814 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1fb42435f03ee4e10fa9118520029b6b 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GUO 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1fb42435f03ee4e10fa9118520029b6b 0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1fb42435f03ee4e10fa9118520029b6b 0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1fb42435f03ee4e10fa9118520029b6b 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GUO 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GUO 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GUO 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.382 11:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a1f7ce5f01f1379974fac1ab333292de581578c9abf46c4174282a6d23bdedd0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ri9 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a1f7ce5f01f1379974fac1ab333292de581578c9abf46c4174282a6d23bdedd0 3 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a1f7ce5f01f1379974fac1ab333292de581578c9abf46c4174282a6d23bdedd0 3 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a1f7ce5f01f1379974fac1ab333292de581578c9abf46c4174282a6d23bdedd0 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:44.382 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ri9 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ri9 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ri9 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0d146a6b0e874a7d6c793d57b68b3cb9b766992bbb506a2 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4KA 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0d146a6b0e874a7d6c793d57b68b3cb9b766992bbb506a2 0 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0d146a6b0e874a7d6c793d57b68b3cb9b766992bbb506a2 0 
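
Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps their hex expansion as the secret string, and wraps it in the DHHC-1 representation used for NVMe in-band authentication secrets. The wrapping itself happens in an inline python snippet whose body xtrace does not show; a minimal sketch of the whole helper, assuming the usual convention that the base64 payload is the ASCII secret followed by its little-endian CRC-32 trailer:

    gen_dhchap_key() {  # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key
        key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)   # $2 hex characters of entropy
        # DHHC-1:<2-digit hash id>:base64(ASCII secret + CRC-32 trailer):
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "${digests[$1]}"
    }

Consistent with this, the DHHC-1:00: secret for keys[0] later in the log base64-decodes back to the 32-character hex string 1fb42435... generated just above it, plus four trailing checksum bytes.
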
00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0d146a6b0e874a7d6c793d57b68b3cb9b766992bbb506a2 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4KA 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4KA 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4KA 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c80e23de42f16ec01bcce81c123c95645609d24474d9d70c 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Oa7 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c80e23de42f16ec01bcce81c123c95645609d24474d9d70c 2 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c80e23de42f16ec01bcce81c123c95645609d24474d9d70c 2 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c80e23de42f16ec01bcce81c123c95645609d24474d9d70c 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Oa7 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Oa7 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Oa7 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.642 11:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f3099328e34f646c1f330ba2b2a17a1c 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wXQ 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f3099328e34f646c1f330ba2b2a17a1c 1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f3099328e34f646c1f330ba2b2a17a1c 1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f3099328e34f646c1f330ba2b2a17a1c 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:44.642 11:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wXQ 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wXQ 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wXQ 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79be9fc12c28c9bde44b4ba49e80311a 00:19:44.642 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LgQ 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79be9fc12c28c9bde44b4ba49e80311a 1 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79be9fc12c28c9bde44b4ba49e80311a 1 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=79be9fc12c28c9bde44b4ba49e80311a 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LgQ 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LgQ 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.LgQ 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48c0f08c4d2e12cc9b5cf7ac5a690abf91ec58548528c9e7 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TIu 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48c0f08c4d2e12cc9b5cf7ac5a690abf91ec58548528c9e7 2 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48c0f08c4d2e12cc9b5cf7ac5a690abf91ec58548528c9e7 2 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48c0f08c4d2e12cc9b5cf7ac5a690abf91ec58548528c9e7 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.901 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TIu 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TIu 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TIu 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:44.902 11:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac6bf49b2f39988deac52c4f91db8578 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UTV 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac6bf49b2f39988deac52c4f91db8578 0 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac6bf49b2f39988deac52c4f91db8578 0 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac6bf49b2f39988deac52c4f91db8578 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UTV 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UTV 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UTV 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e77e7bddb08a97a15d1027edc9555b62b70dd1f426a8f5efbe8336327a0558b2 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wIc 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e77e7bddb08a97a15d1027edc9555b62b70dd1f426a8f5efbe8336327a0558b2 3 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e77e7bddb08a97a15d1027edc9555b62b70dd1f426a8f5efbe8336327a0558b2 3 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e77e7bddb08a97a15d1027edc9555b62b70dd1f426a8f5efbe8336327a0558b2 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:44.902 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wIc 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wIc 00:19:45.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wIc 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 94137 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94137 ']' 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.160 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GUO 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ri9 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ri9 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4KA 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Oa7 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Oa7 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wXQ 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.LgQ ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LgQ 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TIu 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UTV ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UTV 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:45.419 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wIc 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.420 11:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:45.420 11:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:45.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.986 Waiting for block devices as requested 00:19:45.986 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.986 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:46.577 No valid GPT data, bailing 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:46.577 11:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:46.834 No valid GPT data, bailing 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:46.834 No valid GPT data, bailing 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:46.834 No valid GPT data, bailing 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:46.834 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:46.835 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.1 -t tcp -s 4420 00:19:47.093 00:19:47.093 Discovery Log Number of Records 2, Generation counter 2 00:19:47.093 =====Discovery Log Entry 0====== 00:19:47.093 trtype: tcp 00:19:47.093 adrfam: ipv4 00:19:47.093 subtype: current discovery subsystem 00:19:47.093 treq: not specified, sq flow control disable supported 00:19:47.093 portid: 1 00:19:47.093 trsvcid: 4420 00:19:47.093 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:47.093 traddr: 10.0.0.1 00:19:47.093 eflags: none 00:19:47.093 sectype: none 00:19:47.093 =====Discovery Log Entry 1====== 00:19:47.093 trtype: tcp 00:19:47.093 adrfam: ipv4 00:19:47.093 subtype: nvme subsystem 00:19:47.093 treq: not specified, sq flow control disable supported 00:19:47.093 portid: 1 00:19:47.093 trsvcid: 4420 00:19:47.093 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:47.093 traddr: 10.0.0.1 00:19:47.093 eflags: none 00:19:47.093 sectype: none 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
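
configure_kernel_target above builds the kernel-side subsystem entirely through configfs. setup.sh reset first hands a PCI NVMe device back to the kernel driver; the "No valid GPT data, bailing" probes are block_in_use checks confirming each candidate disk carries no partition table before it is repurposed, and the last one checked, /dev/nvme1n1, becomes the backing device for namespace 1. The bare `echo`s in the xtrace write into configfs attributes, but the redirection targets are not shown, so the attribute paths below are reconstructed from the standard nvmet configfs layout, not read from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > $subsys/attr_model
    echo 1             > $subsys/attr_allow_any_host     # relaxed here, tightened below
    echo /dev/nvme1n1  > $subsys/namespaces/1/device_path
    echo 1             > $subsys/namespaces/1/enable

    echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
    echo tcp      > $nvmet/ports/1/addr_trtype
    echo 4420     > $nvmet/ports/1/addr_trsvcid
    echo ipv4     > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/      # expose the subsystem on the port

    # nvmet_auth_init: require an explicit host whitelist for the auth test
    mkdir $nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > $subsys/attr_allow_any_host
    ln -s $nvmet/hosts/nqn.2024-02.io.spdk:host0 $subsys/allowed_hosts/

The `nvme discover` output above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable at 10.0.0.1:4420 before any authenticated connect is attempted.
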
ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:47.093 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.094 nvme0n1 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.094 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.351 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 nvme0n1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.352 
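
Every nvmet_auth_set_key/connect_authenticate pair in this stretch of the log has the same shape: program the DH-HMAC-CHAP parameters on the kernel target's host entry, then have the SPDK host dial in with the matching keyring entries registered earlier via keyring_file_add_key. As before, the echo redirection targets are hidden by xtrace; the dhchap_* names below are the kernel nvmet host attributes and are reconstructed, not literal log output:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'      > $host/dhchap_hash        # digest under test
    echo ffdhe2048           > $host/dhchap_dhgroup     # DH group under test
    echo "${keys[$keyid]}"   > $host/dhchap_key         # host secret
    echo "${ckeys[$keyid]}"  > $host/dhchap_ctrl_key    # controller secret (bidirectional)

    # SPDK host side: pin the allowed algorithms, attach, verify, detach
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The interleaved "nvme0n1" lines are the namespace surfacing on the host each time a controller attach succeeds.
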
11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.352 11:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.352 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.610 nvme0n1 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.610 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:47.611 11:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.611 11:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 nvme0n1 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.870 11:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 nvme0n1 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.870 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.129 
11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
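The repeated trace blocks above and below are all produced by one nested loop in host/auth.sh, iterating digests x dhgroups x keyids. A minimal sketch of that loop, reconstructed from the xtrace output rather than copied from the script: the RPC names, flags, NQNs, target address, and the keyid-0 secrets are taken verbatim from the trace, while the rpc.py wrapper path, the abbreviated digest/dhgroup arrays, and the comments are illustrative assumptions.

  #!/usr/bin/env bash
  # Sketch of the per-digest/dhgroup/keyid DH-HMAC-CHAP loop traced above.
  # Assumes an SPDK checkout as cwd and that the DHHC-1 secrets were already
  # registered under the key names key0..key4 / ckey0..ckey3, as in this run
  # (keyid 4 has no controller key, so that round authenticates one-way only).

  rpc_cmd() { ./scripts/rpc.py "$@"; }            # illustrative wrapper/path

  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0

  digests=("sha256" "sha384" "sha512")            # abbreviated for the sketch
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096")  # abbreviated for the sketch
  # keys[]/ckeys[] hold the DHHC-1 secrets; keyid-0 pair shown in the trace
  # (the real run populates keys[0..4] and ckeys[0..3] the same way):
  keys[0]="DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52:"
  ckeys[0]="DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=:"

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: write hmac(<digest>), the dhgroup, and the key(s) into
        # the nvmet host entry (the script's own helper, host/auth.sh@42-51).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # A controller key is passed only when a ckey exists for this keyid;
        # the unquoted :+ expansion splits into two words, as at host/auth.sh@58.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded iff the controller materialized as nvme0;
        # detach so the next round starts from a clean state.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

Each round detaches nvme0 before re-keying because the digest/dhgroup options take effect on subsequent attaches; the bdev_nvme_get_controllers check is the pass/fail gate for that digest/dhgroup/keyid combination.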
00:19:48.129 nvme0n1 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.129 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.130 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.697 11:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.697 nvme0n1 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.697 11:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.697 11:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:48.697 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.698 11:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.698 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.956 nvme0n1 00:19:48.956 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.957 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.216 nvme0n1 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.216 nvme0n1 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.216 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.217 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.217 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.217 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.476 nvme0n1 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.476 11:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.044 11:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.044 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.303 nvme0n1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.303 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.562 11:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.562 nvme0n1 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.562 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.821 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.822 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.822 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.822 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.822 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.822 11:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.822 nvme0n1 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.822 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.081 nvme0n1 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.081 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.340 11:39:56 
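The nvmet_auth_set_key calls interleaved above (host/auth.sh@42-51) configure the kernel nvmet target side for each key index. The xtrace shows the echoes but not their redirection targets; the sketch below assumes they land in the standard Linux nvmet configfs attributes, with NVME_HOSTNQN standing in for the allowed host's NQN and keys/ckeys being the script's secret arrays visible in the loop headers.

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host_dir=/sys/kernel/config/nvmet/hosts/$NVME_HOSTNQN  # assumed path

	echo "hmac($digest)" > "$host_dir/dhchap_hash"  # @48: e.g. 'hmac(sha256)'
	echo "$dhgroup" > "$host_dir/dhchap_dhgroup"    # @49: e.g. ffdhe4096
	echo "$key" > "$host_dir/dhchap_key"            # @50: host secret (DHHC-1:..)
	# @51: a controller key is only written when bidirectional auth is being
	# tested; keyid 4 above has an empty ckey, so that round is unidirectional.
	[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}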
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.340 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.341 nvme0n1 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.341 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.600 11:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.505 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 nvme0n1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 11:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.073 nvme0n1 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.073 11:39:59 
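Each nvmet_auth_set_key round is paired with connect_authenticate (host/auth.sh@55-65), which drives the initiator side. Condensed from the repeated trace blocks, so a reconstruction rather than the verbatim script:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# @58: only pass a controller key when one exists for this keyid.
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# @60: restrict the initiator to the digest/dhgroup combination under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# @61: connect with key $keyid; DH-HMAC-CHAP runs during the attach.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
	# @64: authentication succeeded iff the controller actually appeared.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	# @65: tear down before the next key/dhgroup combination.
	rpc_cmd bdev_nvme_detach_controller nvme0
}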
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:54.073 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.074 11:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.074 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.332 nvme0n1 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.332 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.591 11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.591 
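The secrets being cycled through follow the DHHC-1 representation used by nvme-cli and the kernel: DHHC-1:<hmac-id>:<base64>:, where the id (00 = unqualified, 01/02/03 = SHA-256/384/512) fixes the secret length and the base64 payload appears to carry the raw secret plus a trailing CRC32. A quick length check against the keyid-0 secret from this run:

key="DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52:"
# Field 3 is the base64 payload; its 48 characters decode to 36 bytes,
# i.e. a 32-byte secret followed by a 4-byte CRC32.
cut -d: -f3 <<< "$key" | base64 -d | wc -c   # prints 36

Consistently, the keyid-4 secret above is tagged DHHC-1:03: and its longer payload decodes to 68 bytes (a 64-byte SHA-512-sized secret plus the CRC).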
11:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.850 nvme0n1 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.850 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 nvme0n1 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.417 11:40:00 
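At this point the outer loop advances to ffdhe8192. The nesting is visible at host/auth.sh@101-104: every configured DH group is exercised with every key index for the current digest. Condensed from the trace (the surrounding digest loop is not visible in this excerpt):

for dhgroup in "${dhgroups[@]}"; do       # ffdhe4096, ffdhe6144, ffdhe8192, ...
	for keyid in "${!keys[@]}"; do    # 0..4
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
		connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
	done
done

The timestamps also show where the wall clock goes: the ffdhe8192 rounds take roughly half a second and up per key, versus a few hundred milliseconds for ffdhe4096, which is consistent with the larger DH group dominating the attach.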
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.417 11:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.986 nvme0n1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.986 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.553 nvme0n1 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.553 
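Every rpc_cmd in this log is bracketed by the xtrace_disable / set +x pair from common/autotest_common.sh@563/@10, which mutes tracing while the JSON-RPC round trip runs. Functionally, rpc_cmd forwards its arguments to SPDK's JSON-RPC client; a minimal one-shot stand-in, assuming the default rpc.py client and socket path (the framework's own helper behaves the same but avoids respawning the client for every call):

rpc_cmd() {
	# Send the named RPC method and its arguments to the running SPDK target.
	"$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"
}

Used exactly as in the trace, e.g. rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name' yielding nvme0 once the authenticated controller is up.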
11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.553 11:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.119 nvme0n1 00:19:57.119 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.119 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.119 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.119 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.119 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.378 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.379 11:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.946 nvme0n1 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.946 11:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.946 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.947 11:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.947 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 nvme0n1 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.773 11:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.773 nvme0n1 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.773 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.774 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.033 nvme0n1 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:59.033 
11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.033 nvme0n1 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.033 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.292 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.293 
11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.293 nvme0n1 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.293 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 nvme0n1 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.554 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 nvme0n1 00:19:59.814 11:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.814 
11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.814 11:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 nvme0n1 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.814 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:00.073 11:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.073 nvme0n1 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.073 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 nvme0n1 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.333 
11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.333 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
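
The ffdhe3072 iterations above all follow one fixed shape: host/auth.sh@101-104 walk each DH group and each key id, nvmet_auth_set_key reprograms the target side, and connect_authenticate drives the host side through four RPCs. The following is a minimal sketch of that cycle reconstructed from this trace, not the test's verbatim source: the rpc.py path is an assumption (rpc_cmd wraps SPDK's scripts/rpc.py in the autotest harness), the key names key0..key4/ckey0..ckey3 were registered earlier in the run (outside this excerpt), and 10.0.0.1 is what get_main_ns_ip resolves here. The RPC names and flags are taken verbatim from the trace.

# Sketch of the host-side sequence per (digest, dhgroup, keyid), under the
# assumptions stated above. nvmet_auth_set_key is the test's own target-side
# helper (definition not shown in this excerpt).
rpc_py=scripts/rpc.py
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do   # groups seen in this excerpt
    for keyid in 0 1 2 3 4; do
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # Pin the host to a single digest and DH group for this pass.
        "$rpc_py" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Connect with DH-HMAC-CHAP; the real script adds --dhchap-ctrlr-key
        # conditionally (key4 has no controller key; see the ${ckeys[keyid]:+...}
        # expansion later in this log).
        "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Verify the controller came up, then tear it down for the next pass.
        [[ $("$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        "$rpc_py" bdev_nvme_detach_controller nvme0
    done
done

The attach/verify/detach per combination is what produces the repeating nvme0n1 / bdev_nvme_get_controllers blocks below, with the dhgroup advancing to ffdhe4096.
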
00:20:00.591 nvme0n1 00:20:00.591 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.591 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.591 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.592 11:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.592 11:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.851 nvme0n1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.851 11:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.851 11:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.851 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.110 nvme0n1 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.110 nvme0n1 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y:
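
Every secret echoed in this trace uses the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:. As a general note on that format (this summarizes the DH-HMAC-CHAP secret convention, nothing specific to this run): the middle field records the optional secret transformation (00 = secret used as-is, 01/02/03 = transformed with SHA-256/384/512), and the base64 payload is the raw secret followed by a 4-byte CRC-32 trailer. A quick way to inspect one of the keys from this log:

# Decode a DHHC-1 secret taken from the trace above and report the raw
# secret size; the framing rules are the DH-HMAC-CHAP convention described
# in the lead-in, the arithmetic is just base64 bookkeeping.
key='DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL:'
payload=${key#DHHC-1:*:}        # strip the 'DHHC-1:01:' framing
payload=${payload%:}            # strip the trailing ':'
secret_len=$(( $(printf '%s' "$payload" | base64 -d | wc -c) - 4 ))
echo "raw secret: ${secret_len} bytes"   # 32 for this key; key3 decodes to 48, key4 to 64
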
00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.110 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 nvme0n1 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 nvme0n1 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.656 11:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.656 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.915 nvme0n1 00:20:01.915 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.915 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.915 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.915 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
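
The nvmf/common.sh@769-783 expansions that recur before every attach (most recently above, for the key4 connect) are get_main_ns_ip resolving which address to dial. The trace only shows the expanded tests, so the following is a hedged reconstruction of the helper's logic rather than its verbatim source: it maps the transport to the name of the environment variable that holds the right address, then dereferences that name indirectly.

# Hedged reconstruction of get_main_ns_ip from the xtrace at
# nvmf/common.sh@769-783; variable names come from the trace, the control
# flow is inferred. With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1
# it prints 10.0.0.1, matching the attach commands in this log.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # indirect expansion of that variable
    echo "${!ip}"
}
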
00:20:01.915 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.916 11:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.916 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.486 nvme0n1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.486 11:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.486 11:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.753 nvme0n1 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.753 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
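
The next trace entry, host/auth.sh@58, is the idiom that makes the controller key optional: ${ckeys[keyid]:+...} expands to the --dhchap-ctrlr-key flag pair only when a controller key exists for that key id, which is why the key4 attach earlier in this log carried no --dhchap-ctrlr-key at all. A standalone demo of the expansion (values are placeholders, not keys from this run):

# Demo of the ${var:+alternative} expansion from host/auth.sh@58: the
# array stays empty when ckeys[keyid] is unset or empty, so "${ckey[@]}"
# contributes nothing to the eventual attach command line.
ckeys=([2]=DHHC-1:01:placeholder: [4]=)   # key 4: unidirectional, no controller key
for keyid in 2 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no extra args>}"
done
# keyid=2 -> --dhchap-ctrlr-key ckey2
# keyid=4 -> <no extra args>
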
00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.754 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.013 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.272 nvme0n1 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.272 nvme0n1 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.272 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.273 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.841 nvme0n1 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.841 11:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.841 11:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.841 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.100 nvme0n1 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.100 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.101 11:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.669 nvme0n1 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.928 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.929 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 nvme0n1 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.496 11:40:10 
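Every secret in this run uses the DHHC-1 key format, DHHC-1:<t>:<base64>:, where <t> indicates how the secret is transformed before use (00 = used as-is, 01/02/03 = SHA-256/384/512, which also pins the secret length to 32/48/64 bytes) and the base64 payload carries the secret followed by a 4-byte CRC-32. A quick sanity check on one of the keys from this log, assuming GNU coreutils:

  key='DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52:'
  # Field 3 is the payload; a 32-byte secret plus the 4-byte CRC tail decodes to 36 bytes.
  echo "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36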
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.496 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.497 11:40:10 
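The bare nvme0n1 tokens between iterations are the test waiting for the namespace to appear after each authenticated connect; the RPC pair that follows (host/auth.sh@64-65) then confirms exactly one controller named nvme0 exists and tears it down before the next combination. Expanded out of the rpc_cmd wrapper, using SPDK's scripts/rpc.py:

  # Verify the controller authenticated and attached, then detach it.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                     # the \n\v\m\e\0 pattern in the trace
  scripts/rpc.py bdev_nvme_detach_controller nvme0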
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.497 11:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 nvme0n1 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.432 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.433 11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.433 
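get_main_ns_ip (nvmf/common.sh@769-783) resolves the address to dial by mapping the transport to the name of an environment variable and expanding it indirectly; with tcp that is NVMF_INITIATOR_IP, which holds 10.0.0.1 throughout this run. A compact reconstruction of the visible logic; the name of the transport variable is an assumption, since only its value (tcp) shows in the trace:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                   # assumed variable name
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # indirect expansion of the chosen env var
      echo "${!ip}"                 # 10.0.0.1 here
  }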
11:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.000 nvme0n1 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.000 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.568 nvme0n1 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:07.568 11:40:12 
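The @100/@101/@102 markers that just fired give away the overall shape of the test: three nested loops over digests, DH groups, and key indices, each iteration provisioning the target and then authenticating a fresh connection. At this point the log has rolled over from the sha384 pass into sha512 with ffdhe2048. The driver reduces to the sketch below; the exact array contents are assumptions, since this excerpt only shows sha384/sha512 and the ffdhe2048-ffdhe8192 groups with key ids 0-4:

  for digest in "${digests[@]}"; do             # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do       # host/auth.sh@101
          for keyid in "${!keys[@]}"; do        # host/auth.sh@102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
          done
      done
  done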
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.568 11:40:12 
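connect_authenticate itself comes down to two RPCs, both visible in the trace: bdev_nvme_set_options at @60 restricts the initiator to exactly the digest and DH group under test, and bdev_nvme_attach_controller at @61 dials the target with the key material for this keyid. Spelled out with scripts/rpc.py in place of the rpc_cmd wrapper; key0/ckey0 are names of keyring entries registered earlier in the run, outside this excerpt:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0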
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.568 11:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.826 nvme0n1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:07.826 11:40:13 
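One bash subtlety worth noting is host/auth.sh@58, which repeats before every connect: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The ${var:+...} expansion builds a two-element array when a controller key exists for the keyid and an empty array when it does not (keyid 4 has ckey='', as the [[ -z '' ]] checks at @51 show), so --dhchap-ctrlr-key is passed, and bidirectional authentication requested, only where a key is defined. In isolation:

  ckeys=([0]='DHHC-1:03:...' [4]='')       # abbreviated; key 4 has no controller key
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2 for keyid 0, 0 for keyid 4
  done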
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.826 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 nvme0n1 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 nvme0n1 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.342 nvme0n1 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:08.342 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.343 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 nvme0n1 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.601 nvme0n1 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 11:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.601 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 nvme0n1 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.860 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:09.119 
11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.119 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 nvme0n1 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.120 
11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.120 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 nvme0n1 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.379 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 nvme0n1 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.638 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.639 11:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.897 nvme0n1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.897 
11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.897 11:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.897 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 nvme0n1 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:10.156 11:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.156 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.445 nvme0n1 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.445 11:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.445 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.446 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.728 nvme0n1 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.728 
11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.728 11:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.728 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
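The ffdhe4096 pass above repeats one cycle per keyid (0 through 4): program the key into the kernel nvmet target, restrict SPDK's allowed DH-HMAC-CHAP digest and dhgroup, re-attach the controller over TCP with the matching keyring entry, confirm a controller named nvme0 appears, and detach before the next combination. Below is a minimal bash sketch of the keyid=4 iteration just traced (the one key with no bidirectional controller key). The RPC arguments are copied from the trace — the test's rpc_cmd wrapper forwards them to SPDK's rpc.py — but the configfs attribute names and host directory path are assumptions for illustration, since xtrace does not print the redirection targets inside nvmet_auth_set_key; likewise, "key4" is assumed to be a keyring entry registered earlier in the test (e.g. via keyring_file_add_key), not the raw DHHC-1 secret.

    # Target side (kernel nvmet): set DH-HMAC-CHAP parameters for the host.
    # The configfs paths/attribute names below are assumed, not from the trace.
    digest=sha512 dhgroup=ffdhe4096 keyid=4
    key='DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=:'
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"
    echo "${key}"          > "${host_dir}/dhchap_key"   # keyid 4 has ckey='', so no dhchap_ctrl_key write

    # Initiator side (SPDK): allow only the digest/dhgroup under test, then
    # reconnect using the keyring entry "key4" and verify the controller name.
    rpc.py bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

A key/secret mismatch between the target attributes and the keyring entry would make the attach step fail, which is why each iteration detaches nvme0 before the loop advances; the ffdhe6144 pass that follows repeats the same cycle with the larger DH group.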
00:20:10.987 nvme0n1 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.987 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:10.988 11:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.988 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.246 nvme0n1 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.246 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.505 11:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.505 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.506 11:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.506 11:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 nvme0n1 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.764 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.765 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.332 nvme0n1 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:12.332 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.333 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.591 nvme0n1 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.591 11:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.158 nvme0n1 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZiNDI0MzVmMDNlZTRlMTBmYTkxMTg1MjAwMjliNmLEmt52: 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTFmN2NlNWYwMWYxMzc5OTc0ZmFjMWFiMzMzMjkyZGU1ODE1NzhjOWFiZjQ2YzQxNzQyODJhNmQyM2JkZWRkMJlfMBg=: 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.158 11:40:18 
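The get_main_ns_ip block that recurs before every attach stores variable names, not values, in its associative array, which is why the trace shows ip=NVMF_INITIATOR_IP immediately followed by echo 10.0.0.1. A sketch of the pattern is below; this is a reconstruction, since the real definition lives in nvmf/common.sh outside this excerpt, and the TEST_TRANSPORT name is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # name of the var holding the RDMA address
            [tcp]=NVMF_INITIATOR_IP       # name of the var holding the TCP address
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # trace: ip=NVMF_INITIATOR_IP
        # ${!ip} is bash indirect expansion: the value of the variable
        # whose name is stored in $ip, e.g. NVMF_INITIATOR_IP=10.0.0.1.
        [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # trace: echo 10.0.0.1
    }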
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.158 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.725 nvme0n1 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.725 11:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:13.725 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.726 11:40:19 
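Worth a note: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment in each connect_authenticate builds the optional flag pair in one stroke. Bash's ${var:+word} expands to word only when var is set and non-empty, so ckey ends up either a two-element array or an empty one, and "${ckey[@]}" can be spliced into the attach command unconditionally. A standalone illustration of the idiom (the variable names here are illustrative, not from the suite):

    #!/usr/bin/env bash
    # ${var:+...} yields the alternate words only when var is non-empty,
    # which makes it a compact way to build optional argument lists.
    ctrlr_key=ckey2
    extra=(${ctrlr_key:+--dhchap-ctrlr-key "$ctrlr_key"})
    echo "with key:    ${extra[*]}"        # -> --dhchap-ctrlr-key ckey2

    ctrlr_key=
    extra=(${ctrlr_key:+--dhchap-ctrlr-key "$ctrlr_key"})
    echo "without key: ${#extra[@]} args"  # -> 0 args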
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.726 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.293 nvme0n1 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.294 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:14.552 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.553 11:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.119 nvme0n1 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDhjMGYwOGM0ZDJlMTJjYzliNWNmN2FjNWE2OTBhYmY5MWVjNTg1NDg1MjhjOWU3BKGvUA==: 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: ]] 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWM2YmY0OWIyZjM5OTg4ZGVhYzUyYzRmOTFkYjg1Nzg5HGPj: 00:20:15.119 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.120 11:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.686 nvme0n1 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.686 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc3ZTdiZGRiMDhhOTdhMTVkMTAyN2VkYzk1NTViNjJiNzBkZDFmNDI2YThmNWVmYmU4MzM2MzI3YTA1NThiMkfPAj4=: 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.687 11:40:21 
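keyid 4 is the one entry without a controller key: ckey expands empty, the [[ -z '' ]] branch skips the controller-key write on the target, and the attach in this stretch carries only --dhchap-key key4, so the host authenticates itself without demanding proof from the controller in return. The two attach shapes from this pass of the loop, side by side (both commands as they appear in the trace):

    # keyids 0-3: bidirectional DH-HMAC-CHAP (both sides authenticate).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # keyid 4: unidirectional, host-side authentication only.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4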
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.687 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 nvme0n1 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:16.623 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 2024/12/16 11:40:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:16.624 request: 00:20:16.624 { 00:20:16.624 "method": "bdev_nvme_attach_controller", 00:20:16.624 "params": { 00:20:16.624 "name": "nvme0", 00:20:16.624 "trtype": "tcp", 00:20:16.624 "traddr": "10.0.0.1", 00:20:16.624 "adrfam": "ipv4", 00:20:16.624 "trsvcid": "4420", 00:20:16.624 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:16.624 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:16.624 "prchk_reftag": false, 00:20:16.624 "prchk_guard": false, 00:20:16.624 "hdgst": false, 00:20:16.624 "ddgst": false, 00:20:16.624 "allow_unrecognized_csi": false 00:20:16.624 } 00:20:16.624 } 00:20:16.624 Got JSON-RPC error response 00:20:16.624 GoRPCClient: error on JSON-RPC call 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 2024/12/16 11:40:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:16.624 request: 00:20:16.624 { 00:20:16.624 "method": "bdev_nvme_attach_controller", 00:20:16.624 "params": { 00:20:16.624 "name": "nvme0", 00:20:16.624 "trtype": "tcp", 00:20:16.624 "traddr": "10.0.0.1", 00:20:16.624 "adrfam": "ipv4", 00:20:16.624 "trsvcid": "4420", 00:20:16.624 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:16.624 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:16.624 "prchk_reftag": false, 00:20:16.624 "prchk_guard": false, 
00:20:16.624 "hdgst": false, 00:20:16.624 "ddgst": false, 00:20:16.624 "dhchap_key": "key2", 00:20:16.624 "allow_unrecognized_csi": false 00:20:16.624 } 00:20:16.624 } 00:20:16.624 Got JSON-RPC error response 00:20:16.624 GoRPCClient: error on JSON-RPC call 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.624 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.625 2024/12/16 11:40:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:16.625 request: 00:20:16.625 { 00:20:16.625 "method": "bdev_nvme_attach_controller", 00:20:16.625 "params": { 00:20:16.625 "name": "nvme0", 00:20:16.625 "trtype": "tcp", 00:20:16.625 "traddr": "10.0.0.1", 00:20:16.625 "adrfam": "ipv4", 00:20:16.625 "trsvcid": "4420", 00:20:16.625 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:16.625 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:16.625 "prchk_reftag": false, 00:20:16.625 "prchk_guard": false, 00:20:16.625 "hdgst": false, 00:20:16.625 "ddgst": false, 00:20:16.625 "dhchap_key": "key1", 00:20:16.625 "dhchap_ctrlr_key": "ckey2", 00:20:16.625 "allow_unrecognized_csi": false 00:20:16.625 } 00:20:16.625 } 00:20:16.625 Got JSON-RPC error response 00:20:16.625 GoRPCClient: error on JSON-RPC call 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.625 11:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.883 nvme0n1 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.884 2024/12/16 11:40:22 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:16.884 request: 00:20:16.884 { 00:20:16.884 "method": "bdev_nvme_set_keys", 00:20:16.884 "params": { 00:20:16.884 "name": "nvme0", 00:20:16.884 "dhchap_key": "key1", 00:20:16.884 "dhchap_ctrlr_key": "ckey2" 00:20:16.884 } 00:20:16.884 } 00:20:16.884 Got JSON-RPC error response 00:20:16.884 GoRPCClient: error on JSON-RPC call 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:16.884 11:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:18.259 11:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkMTQ2YTZiMGU4NzRhN2Q2Yzc5M2Q1N2I2OGIzY2I5Yjc2Njk5MmJiYjUwNmEy20DtFg==: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzgwZTIzZGU0MmYxNmVjMDFiY2NlODFjMTIzYzk1NjQ1NjA5ZDI0NDc0ZDlkNzBjA+FzUA==: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.259 nvme0n1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
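The attach attempts traced above exercise SPDK's bdev_nvme_attach_controller JSON-RPC with DH-HMAC-CHAP: the calls that omit a key, pass key2 alone, or pair key1 with the wrong controller key are expected to fail with -5 (Input/output error), while the key1/ckey1 pair matching the target's nvmet configuration attaches nvme0 successfully. A minimal standalone sketch of the same calls via scripts/rpc.py, assuming the target from this run is listening on 10.0.0.1:4420 and that key1/ckey1/key2/ckey2 name DHHC-1 secrets the initiator already knows (the harness plumbs these through rpc_cmd); every flag below is taken verbatim from the trace:

# Successful attach with the matching key pair, mirroring host/auth.sh@128 above.
scripts/rpc.py bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Re-keying the attached controller, as host/auth.sh@133 did earlier; the
# mismatched key2/ckey1 attempt that follows in the trace is rejected
# with -13 Permission denied.
scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2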
00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjMwOTkzMjhlMzRmNjQ2YzFmMzMwYmEyYjJhMTdhMWMAg0OL: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: ]] 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzliZTlmYzEyYzI4YzliZGU0NGI0YmE0OWU4MDMxMWHimm+Y: 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.259 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.260 2024/12/16 11:40:23 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:18.260 request: 00:20:18.260 { 00:20:18.260 "method": "bdev_nvme_set_keys", 00:20:18.260 "params": { 00:20:18.260 "name": "nvme0", 00:20:18.260 "dhchap_key": "key2", 00:20:18.260 "dhchap_ctrlr_key": "ckey1" 00:20:18.260 } 00:20:18.260 } 00:20:18.260 Got JSON-RPC error response 00:20:18.260 GoRPCClient: error on JSON-RPC call 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.260 11:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:18.260 11:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.194 rmmod nvme_tcp 00:20:19.194 rmmod nvme_fabrics 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 94137 ']' 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 94137 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 94137 ']' 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 94137 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.194 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94137 00:20:19.453 killing process with pid 94137 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94137' 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 94137 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 94137 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.453 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.711 11:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:19.711 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:19.968 11:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:20.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:20.791 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:20.791 11:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GUO /tmp/spdk.key-null.4KA /tmp/spdk.key-sha256.wXQ /tmp/spdk.key-sha384.TIu /tmp/spdk.key-sha512.wIc /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:20.791 11:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:21.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:21.050 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:21.050 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:21.050 00:20:21.050 real 0m37.837s 00:20:21.050 user 0m34.442s 00:20:21.050 sys 0m3.883s 00:20:21.050 11:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.050 11:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.050 ************************************ 00:20:21.050 END TEST nvmf_auth_host 00:20:21.050 ************************************ 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.308 ************************************ 00:20:21.308 START TEST nvmf_digest 00:20:21.308 
************************************ 00:20:21.308 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:21.308 * Looking for test storage... 00:20:21.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.309 --rc genhtml_branch_coverage=1 00:20:21.309 --rc genhtml_function_coverage=1 00:20:21.309 --rc genhtml_legend=1 00:20:21.309 --rc geninfo_all_blocks=1 00:20:21.309 --rc geninfo_unexecuted_blocks=1 00:20:21.309 00:20:21.309 ' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.309 --rc genhtml_branch_coverage=1 00:20:21.309 --rc genhtml_function_coverage=1 00:20:21.309 --rc genhtml_legend=1 00:20:21.309 --rc geninfo_all_blocks=1 00:20:21.309 --rc geninfo_unexecuted_blocks=1 00:20:21.309 00:20:21.309 ' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.309 --rc genhtml_branch_coverage=1 00:20:21.309 --rc genhtml_function_coverage=1 00:20:21.309 --rc genhtml_legend=1 00:20:21.309 --rc geninfo_all_blocks=1 00:20:21.309 --rc geninfo_unexecuted_blocks=1 00:20:21.309 00:20:21.309 ' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:21.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.309 --rc genhtml_branch_coverage=1 00:20:21.309 --rc genhtml_function_coverage=1 00:20:21.309 --rc genhtml_legend=1 00:20:21.309 --rc geninfo_all_blocks=1 00:20:21.309 --rc geninfo_unexecuted_blocks=1 00:20:21.309 00:20:21.309 ' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.309 11:40:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.309 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.309 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:21.568 Cannot find device "nvmf_init_br" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:21.568 Cannot find device "nvmf_init_br2" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:21.568 Cannot find device "nvmf_tgt_br" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:21.568 Cannot find device "nvmf_tgt_br2" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:21.568 Cannot find device "nvmf_init_br" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:21.568 Cannot find device "nvmf_init_br2" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:21.568 Cannot find device "nvmf_tgt_br" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:21.568 Cannot find device "nvmf_tgt_br2" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:21.568 Cannot find device "nvmf_br" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:21.568 Cannot find device "nvmf_init_if" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:21.568 Cannot find device "nvmf_init_if2" 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.568 11:40:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:21.568 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:21.569 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.827 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.827 11:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:21.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:21.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:21.827 00:20:21.827 --- 10.0.0.3 ping statistics --- 00:20:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.827 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:21.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:21.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:20:21.827 00:20:21.827 --- 10.0.0.4 ping statistics --- 00:20:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.827 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:21.827 00:20:21.827 --- 10.0.0.1 ping statistics --- 00:20:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.827 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:21.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:21.827 00:20:21.827 --- 10.0.0.2 ping statistics --- 00:20:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.827 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.827 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:21.828 ************************************ 00:20:21.828 START TEST nvmf_digest_clean 00:20:21.828 ************************************ 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
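The four one-packet pings above close out nvmf_veth_init: the initiator addresses 10.0.0.1 and 10.0.0.2 live in the root namespace, the target addresses 10.0.0.3 and 10.0.0.4 sit inside nvmf_tgt_ns_spdk, and all four veth peers hang off the nvmf_br bridge. A condensed sketch of one leg of that topology, paraphrasing the ip commands traced earlier (the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2, is built the same way):

# Target side lives in its own network namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Address each end and bring the links up.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Bridge the two bridge-side peers together in the root namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Same connectivity check the log performs.
ping -c 1 10.0.0.3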
00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=95807 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 95807 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95807 ']' 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.828 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.828 [2024-12-16 11:40:27.222918] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:20:21.828 [2024-12-16 11:40:27.223044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.085 [2024-12-16 11:40:27.373826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.085 [2024-12-16 11:40:27.437284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.085 [2024-12-16 11:40:27.437340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.085 [2024-12-16 11:40:27.437354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.085 [2024-12-16 11:40:27.437364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.085 [2024-12-16 11:40:27.437374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
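The target side of this test is a stock nvmf_tgt started inside the dedicated network namespace whose connectivity was just verified by the ping checks. A minimal sketch of the equivalent manual startup, using the namespace, binary path, and flags shown in the log; the readiness loop is a crude stand-in for the harness's waitforlisten helper, not the actual implementation:

  # start the SPDK NVMe-oF target in the test namespace, gated on RPC
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # ready once the default RPC socket appears (assumed readiness check)
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done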
00:20:22.085 [2024-12-16 11:40:27.437820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.085 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.085 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:22.085 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.085 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.085 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:22.343 null0 00:20:22.343 [2024-12-16 11:40:27.665361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.343 [2024-12-16 11:40:27.689506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95849 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95849 /var/tmp/bperf.sock 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95849 ']' 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.343 11:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:22.343 [2024-12-16 11:40:27.755166] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:20:22.343 [2024-12-16 11:40:27.755258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95849 ] 00:20:22.601 [2024-12-16 11:40:27.907736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.601 [2024-12-16 11:40:27.970063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.601 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.601 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:22.601 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:22.859 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:22.859 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:23.117 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.117 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.376 nvme0n1 00:20:23.376 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:23.376 11:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:23.634 Running I/O for 2 seconds... 
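Everything bdevperf does in this run is driven over its private RPC socket rather than a config file: start the framework, attach an NVMe-oF controller with TCP data digest enabled (--ddgst), then kick off the workload. A condensed sketch of the sequence above, with paths and arguments taken verbatim from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # the attached namespace surfaces as bdev nvme0n1; run the configured workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests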
00:20:25.503 18277.00 IOPS, 71.39 MiB/s [2024-12-16T11:40:30.917Z] 18323.00 IOPS, 71.57 MiB/s 00:20:25.503 Latency(us) 00:20:25.503 [2024-12-16T11:40:30.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.503 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:25.503 nvme0n1 : 2.00 18344.05 71.66 0.00 0.00 6969.84 4021.53 21805.61 00:20:25.503 [2024-12-16T11:40:30.917Z] =================================================================================================================== 00:20:25.503 [2024-12-16T11:40:30.917Z] Total : 18344.05 71.66 0.00 0.00 6969.84 4021.53 21805.61 00:20:25.503 { 00:20:25.503 "results": [ 00:20:25.503 { 00:20:25.503 "job": "nvme0n1", 00:20:25.503 "core_mask": "0x2", 00:20:25.503 "workload": "randread", 00:20:25.503 "status": "finished", 00:20:25.503 "queue_depth": 128, 00:20:25.503 "io_size": 4096, 00:20:25.503 "runtime": 2.004683, 00:20:25.503 "iops": 18344.0474129825, 00:20:25.503 "mibps": 71.6564352069629, 00:20:25.503 "io_failed": 0, 00:20:25.503 "io_timeout": 0, 00:20:25.503 "avg_latency_us": 6969.839239383557, 00:20:25.503 "min_latency_us": 4021.5272727272727, 00:20:25.503 "max_latency_us": 21805.614545454544 00:20:25.503 } 00:20:25.503 ], 00:20:25.503 "core_count": 1 00:20:25.503 } 00:20:25.503 11:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:25.503 11:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:25.503 11:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:25.503 | select(.opcode=="crc32c") 00:20:25.503 | "\(.module_name) \(.executed)"' 00:20:25.503 11:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:25.503 11:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95849 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95849 ']' 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95849 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.761 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95849 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95849' 00:20:26.023 killing process with pid 95849 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95849 00:20:26.023 Received shutdown signal, test time was about 2.000000 seconds 00:20:26.023 00:20:26.023 Latency(us) 00:20:26.023 [2024-12-16T11:40:31.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.023 [2024-12-16T11:40:31.437Z] =================================================================================================================== 00:20:26.023 [2024-12-16T11:40:31.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95849 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95926 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95926 /var/tmp/bperf.sock 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95926 ']' 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.023 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:26.023 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:26.023 Zero copy mechanism will not be used. 00:20:26.023 [2024-12-16 11:40:31.433528] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:20:26.023 [2024-12-16 11:40:31.433635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95926 ] 00:20:26.281 [2024-12-16 11:40:31.577172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.281 [2024-12-16 11:40:31.632203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.539 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.539 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:26.539 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.539 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.539 11:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.797 11:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.797 11:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.362 nvme0n1 00:20:27.362 11:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.362 11:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:27.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:27.362 Zero copy mechanism will not be used. 00:20:27.362 Running I/O for 2 seconds... 
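After each run the test decides whether digests were computed by the expected accel module by querying accel statistics over the same socket and filtering for executed crc32c operations. The jq filter below is copied verbatim from the check after the first run above; with no DSA configured, the expected output is the software module with a nonzero count:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected here: software <executed-count>, with <executed-count> > 0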
00:20:29.270 7623.00 IOPS, 952.88 MiB/s [2024-12-16T11:40:34.684Z] 7678.00 IOPS, 959.75 MiB/s 00:20:29.270 Latency(us) 00:20:29.270 [2024-12-16T11:40:34.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.270 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:29.270 nvme0n1 : 2.00 7673.04 959.13 0.00 0.00 2081.10 629.29 4915.20 00:20:29.270 [2024-12-16T11:40:34.684Z] =================================================================================================================== 00:20:29.270 [2024-12-16T11:40:34.684Z] Total : 7673.04 959.13 0.00 0.00 2081.10 629.29 4915.20 00:20:29.270 { 00:20:29.270 "results": [ 00:20:29.270 { 00:20:29.270 "job": "nvme0n1", 00:20:29.270 "core_mask": "0x2", 00:20:29.270 "workload": "randread", 00:20:29.270 "status": "finished", 00:20:29.270 "queue_depth": 16, 00:20:29.270 "io_size": 131072, 00:20:29.270 "runtime": 2.003379, 00:20:29.270 "iops": 7673.036404993763, 00:20:29.270 "mibps": 959.1295506242204, 00:20:29.270 "io_failed": 0, 00:20:29.270 "io_timeout": 0, 00:20:29.270 "avg_latency_us": 2081.0951141390483, 00:20:29.270 "min_latency_us": 629.2945454545454, 00:20:29.270 "max_latency_us": 4915.2 00:20:29.270 } 00:20:29.270 ], 00:20:29.270 "core_count": 1 00:20:29.270 } 00:20:29.270 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.270 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.270 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.270 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.270 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.270 | select(.opcode=="crc32c") 00:20:29.270 | "\(.module_name) \(.executed)"' 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95926 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95926 ']' 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95926 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95926 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.838 killing 
process with pid 95926 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95926' 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95926 00:20:29.838 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.838 00:20:29.838 Latency(us) 00:20:29.838 [2024-12-16T11:40:35.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.838 [2024-12-16T11:40:35.252Z] =================================================================================================================== 00:20:29.838 [2024-12-16T11:40:35.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.838 11:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95926 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95997 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95997 /var/tmp/bperf.sock 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95997 ']' 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.838 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.838 [2024-12-16 11:40:35.243756] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:20:29.838 [2024-12-16 11:40:35.243835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95997 ] 00:20:30.096 [2024-12-16 11:40:35.389198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.096 [2024-12-16 11:40:35.447625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.096 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.096 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:30.096 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.096 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.096 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:30.662 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.662 11:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.920 nvme0n1 00:20:30.920 11:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:30.920 11:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:30.920 Running I/O for 2 seconds... 
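The MiB/s column in each results table is just IOPS scaled by the configured io_size. A quick sanity check against the first randread table above, with the numbers taken from the log and 2^20 bytes per MiB:

  awk 'BEGIN { printf "%.2f MiB/s\n", 18344.05 * 4096 / 1048576 }'   # -> 71.66 MiB/s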
00:20:33.229 22260.00 IOPS, 86.95 MiB/s [2024-12-16T11:40:38.643Z] 22218.00 IOPS, 86.79 MiB/s 00:20:33.229 Latency(us) 00:20:33.229 [2024-12-16T11:40:38.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.229 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:33.229 nvme0n1 : 2.00 22238.10 86.87 0.00 0.00 5749.50 2442.71 9651.67 00:20:33.229 [2024-12-16T11:40:38.643Z] =================================================================================================================== 00:20:33.229 [2024-12-16T11:40:38.643Z] Total : 22238.10 86.87 0.00 0.00 5749.50 2442.71 9651.67 00:20:33.229 { 00:20:33.229 "results": [ 00:20:33.229 { 00:20:33.229 "job": "nvme0n1", 00:20:33.229 "core_mask": "0x2", 00:20:33.229 "workload": "randwrite", 00:20:33.229 "status": "finished", 00:20:33.229 "queue_depth": 128, 00:20:33.229 "io_size": 4096, 00:20:33.229 "runtime": 2.003948, 00:20:33.229 "iops": 22238.101986678295, 00:20:33.229 "mibps": 86.86758588546209, 00:20:33.229 "io_failed": 0, 00:20:33.229 "io_timeout": 0, 00:20:33.229 "avg_latency_us": 5749.501934051945, 00:20:33.229 "min_latency_us": 2442.7054545454544, 00:20:33.229 "max_latency_us": 9651.665454545455 00:20:33.229 } 00:20:33.229 ], 00:20:33.229 "core_count": 1 00:20:33.229 } 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.229 | select(.opcode=="crc32c") 00:20:33.229 | "\(.module_name) \(.executed)"' 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95997 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95997 ']' 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95997 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:33.229 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.487 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95997 00:20:33.487 killing process with pid 95997 00:20:33.487 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.487 00:20:33.487 Latency(us) 00:20:33.487 [2024-12-16T11:40:38.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:33.487 [2024-12-16T11:40:38.901Z] =================================================================================================================== 00:20:33.487 [2024-12-16T11:40:38.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95997' 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95997 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95997 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96075 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96075 /var/tmp/bperf.sock 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96075 ']' 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:33.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.488 11:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:33.746 [2024-12-16 11:40:38.918643] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:20:33.746 [2024-12-16 11:40:38.918949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96075 ] 00:20:33.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:33.746 Zero copy mechanism will not be used. 00:20:33.746 [2024-12-16 11:40:39.065823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.746 [2024-12-16 11:40:39.119360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.004 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.004 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:34.004 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:34.004 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:34.005 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:34.263 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.263 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.521 nvme0n1 00:20:34.521 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:34.521 11:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:34.778 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:34.778 Zero copy mechanism will not be used. 00:20:34.778 Running I/O for 2 seconds... 
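This is the last of the four bperf passes that nvmf_digest_clean drives: the cross product of random read/write with a small-block deep queue and a large-block shallow queue. The calls below mirror host/digest.sh@128-131 as they appear in the log (run_bperf is the suite's own helper, arguments rw/bs/qd/scan_dsa):

  run_bperf randread  4096   128 false   # 4 KiB blocks, qd 128
  run_bperf randread  131072  16 false   # 128 KiB blocks, qd 16
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072  16 false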
00:20:36.655 6602.00 IOPS, 825.25 MiB/s [2024-12-16T11:40:42.069Z] 6597.50 IOPS, 824.69 MiB/s 00:20:36.655 Latency(us) 00:20:36.655 [2024-12-16T11:40:42.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.655 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:36.655 nvme0n1 : 2.00 6594.70 824.34 0.00 0.00 2420.35 1757.56 5004.57 00:20:36.655 [2024-12-16T11:40:42.069Z] =================================================================================================================== 00:20:36.655 [2024-12-16T11:40:42.069Z] Total : 6594.70 824.34 0.00 0.00 2420.35 1757.56 5004.57 00:20:36.655 { 00:20:36.655 "results": [ 00:20:36.655 { 00:20:36.655 "job": "nvme0n1", 00:20:36.655 "core_mask": "0x2", 00:20:36.655 "workload": "randwrite", 00:20:36.655 "status": "finished", 00:20:36.655 "queue_depth": 16, 00:20:36.655 "io_size": 131072, 00:20:36.655 "runtime": 2.004034, 00:20:36.655 "iops": 6594.698493139338, 00:20:36.655 "mibps": 824.3373116424173, 00:20:36.655 "io_failed": 0, 00:20:36.655 "io_timeout": 0, 00:20:36.655 "avg_latency_us": 2420.34504072199, 00:20:36.655 "min_latency_us": 1757.5563636363636, 00:20:36.655 "max_latency_us": 5004.567272727273 00:20:36.655 } 00:20:36.655 ], 00:20:36.655 "core_count": 1 00:20:36.655 } 00:20:36.655 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:36.655 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:36.655 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:36.655 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:36.655 | select(.opcode=="crc32c") 00:20:36.655 | "\(.module_name) \(.executed)"' 00:20:36.655 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96075 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96075 ']' 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96075 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.914 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96075 00:20:37.172 killing process with pid 96075 00:20:37.172 Received shutdown signal, test time was about 2.000000 seconds 00:20:37.172 00:20:37.172 Latency(us) 00:20:37.172 [2024-12-16T11:40:42.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:37.172 [2024-12-16T11:40:42.586Z] =================================================================================================================== 00:20:37.172 [2024-12-16T11:40:42.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96075' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96075 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96075 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95807 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95807 ']' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95807 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95807 00:20:37.172 killing process with pid 95807 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95807' 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95807 00:20:37.172 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95807 00:20:37.430 00:20:37.430 real 0m15.632s 00:20:37.430 user 0m30.290s 00:20:37.430 sys 0m4.232s 00:20:37.430 ************************************ 00:20:37.430 END TEST nvmf_digest_clean 00:20:37.430 ************************************ 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:37.430 ************************************ 00:20:37.430 START TEST nvmf_digest_error 00:20:37.430 ************************************ 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:37.430 11:40:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.430 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=96175 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 96175 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96175 ']' 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.689 11:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.689 [2024-12-16 11:40:42.909589] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:20:37.689 [2024-12-16 11:40:42.909722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.689 [2024-12-16 11:40:43.058924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.948 [2024-12-16 11:40:43.121031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.948 [2024-12-16 11:40:43.121078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.948 [2024-12-16 11:40:43.121089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.948 [2024-12-16 11:40:43.121097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.948 [2024-12-16 11:40:43.121105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
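The nvmf_digest_error pass that starts here differs from the clean pass in one key respect: on the target, the crc32c opcode is routed to the accel error-injection module so digest corruption can be triggered on demand, while the host is told to retry transport errors forever. A condensed sketch of the RPCs that follow, flags verbatim from the log; rpc.py with no -s talks to the target's default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target: route crc32c through the error module
  $rpc accel_assign_opc -o crc32c -m error
  # host: track NVMe error stats and retry failed I/O indefinitely
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: corrupt the results of the next 256 crc32c operations
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256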
00:20:37.948 [2024-12-16 11:40:43.121517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.514 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.514 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:38.514 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.514 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.514 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 [2024-12-16 11:40:43.962078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.772 11:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 null0 00:20:38.772 [2024-12-16 11:40:44.082479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.772 [2024-12-16 11:40:44.106597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96219 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96219 /var/tmp/bperf.sock 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96219 ']' 00:20:38.772 11:40:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.772 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 [2024-12-16 11:40:44.174704] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:20:38.772 [2024-12-16 11:40:44.174806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96219 ] 00:20:39.030 [2024-12-16 11:40:44.326756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.030 [2024-12-16 11:40:44.385904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.287 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.287 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:39.288 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.288 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.545 11:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.806 nvme0n1 00:20:39.806 11:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:39.806 11:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.806 11:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.806 11:40:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.806 11:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:39.806 11:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:40.065 Running I/O for 2 seconds... 00:20:40.065 [2024-12-16 11:40:45.261386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.261473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.261506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.065 [2024-12-16 11:40:45.273408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.065 [2024-12-16 11:40:45.286947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.287020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.287036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.065 [2024-12-16 11:40:45.300948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.301036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.301052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.065 [2024-12-16 11:40:45.313099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.313156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.313186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.065 [2024-12-16 11:40:45.326370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:40.065 [2024-12-16 11:40:45.326426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.065 [2024-12-16 11:40:45.326440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
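The repeating ERROR pattern above is the injection working as intended: the target returns corrupted crc32c digests, the host transport flags "data digest error on tqpair", and each affected read completes with status (00/22), which decodes as status code type 0x0 (generic) / status code 0x22, the Command Transient Transport Error that SPDK prints alongside it; with --bdev-retry-count -1 the bdev layer retries these completions instead of failing the I/O. To count how many completions were flagged in a saved copy of this output (the log filename here is hypothetical):

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log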
00:20:40.065 [2024-12-16 11:40:45.341717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.341788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.341825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.357607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.357670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.357692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.373414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.373497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.373519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.389365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.389414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.389430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.404394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.404472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.404487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.418627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.418699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.418731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.432387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.432465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.432480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.446956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.447017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.447034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.461209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.461256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.461271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.065 [2024-12-16 11:40:45.475627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.065 [2024-12-16 11:40:45.475669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.065 [2024-12-16 11:40:45.475684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.489724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.489775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.489791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.504265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.504311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.504327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.518404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.518453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.518468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.533568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.533618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.533633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.547632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.547694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.547709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.561779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.561842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.561856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.575357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.575405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.575419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.589321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.589383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.589398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.603270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.603329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.603343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.617167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.617211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.617225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.323 [2024-12-16 11:40:45.630781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.323 [2024-12-16 11:40:45.630819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.323 [2024-12-16 11:40:45.630834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.644810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.644867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.644881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.658736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.658824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.672805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.672866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.672896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.687257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.687318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.687332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.698850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.698910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.698939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.712368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.712427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.712457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.324 [2024-12-16 11:40:45.725502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.324 [2024-12-16 11:40:45.725561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.324 [2024-12-16 11:40:45.725590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.739190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.739250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.739264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.752244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.752302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.752332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.766353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.766412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.766458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.779561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.779639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.779654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.792602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.792662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.792692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.805483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.805542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.805571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.818300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.818361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.818391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.831055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.831151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.831182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.844646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.844727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.844750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.859670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.859735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.859750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.871297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.871346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.871361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.885741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.885804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.885834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.582 [2024-12-16 11:40:45.898156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.582 [2024-12-16 11:40:45.898220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.582 [2024-12-16 11:40:45.898235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.914497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.914548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.914564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.929030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.929108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.929124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.942506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.942568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.942582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.956400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.956460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.970674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.970738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.970768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.583 [2024-12-16 11:40:45.984879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.583 [2024-12-16 11:40:45.984943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.583 [2024-12-16 11:40:45.984958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.841 [2024-12-16 11:40:46.000053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.841 [2024-12-16 11:40:46.000126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.841 [2024-12-16 11:40:46.000140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.841 [2024-12-16 11:40:46.014118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.841 [2024-12-16 11:40:46.014166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.841 [2024-12-16 11:40:46.014181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.841 [2024-12-16 11:40:46.029267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.841 [2024-12-16 11:40:46.029344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.841 [2024-12-16 11:40:46.029374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.841 [2024-12-16 11:40:46.041692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.841 [2024-12-16 11:40:46.041738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.841 [2024-12-16 11:40:46.041753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.841 [2024-12-16 11:40:46.055733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.841 [2024-12-16 11:40:46.055782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.841 [2024-12-16 11:40:46.055797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.069472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.069534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.069565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.084202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.084261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.084292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.098081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.098145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.098169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.114637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.114733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.114767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.129829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.129914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.129936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.143961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.144050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.144067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.158253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.158329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.158360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.173357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.173435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.173474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.188527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.188631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.188675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.203048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.203173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.203198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.217320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.217386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.217401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 [2024-12-16 11:40:46.232084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.232132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.232147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:40.842 18007.00 IOPS, 70.34 MiB/s [2024-12-16T11:40:46.256Z] [2024-12-16 11:40:46.245991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:40.842 [2024-12-16 11:40:46.246082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.842 [2024-12-16 11:40:46.246099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.100 [2024-12-16 11:40:46.262822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.100 [2024-12-16 11:40:46.262887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.100 [2024-12-16 11:40:46.262902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.100 [2024-12-16 11:40:46.275188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.100 [2024-12-16 11:40:46.275234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.275248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.290471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.290533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.290563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.303922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.303972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.303986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.317159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.317208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.317222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.333278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.333326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.333341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.349284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.349354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.349379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.365678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.365741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.365756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.380062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.380125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.380156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.394147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.394195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.394210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.408227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.408291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.408322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.422123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.422169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.422185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.438093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.438155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.438170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.452867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.452967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.452999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.466963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.467066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.467082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.480845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.480915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.480960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.495671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.495732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.495763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.101 [2024-12-16 11:40:46.509986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.101 [2024-12-16 11:40:46.510066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.101 [2024-12-16 11:40:46.510089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.525540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.525610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.525633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.541674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.541725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.541741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.555427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.555505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.555527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.573817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.573902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.573925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.588725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.588795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.588826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.604053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.604141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.604165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.618690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.618753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.618774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.635427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.635498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.635525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.649731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.649780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.649795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.663944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.663991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.664019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.680451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.680492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.680506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.691710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.691754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.691769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.706605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.706649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.706664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.720566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.720611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.720625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.734434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.734479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.748112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.748156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.748170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.360 [2024-12-16 11:40:46.762581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.360 [2024-12-16 11:40:46.762622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.360 [2024-12-16 11:40:46.762636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.619 [2024-12-16 11:40:46.775211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.619 [2024-12-16 11:40:46.775255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.619 [2024-12-16 11:40:46.775269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.619 [2024-12-16 11:40:46.789063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.619 [2024-12-16 11:40:46.789108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.619 [2024-12-16 11:40:46.789122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.619 [2024-12-16 11:40:46.803746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.619 [2024-12-16 11:40:46.803807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.619 [2024-12-16 11:40:46.803821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.619 [2024-12-16 11:40:46.818626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.619 [2024-12-16 11:40:46.818687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.619 [2024-12-16 11:40:46.818701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.619 [2024-12-16 11:40:46.832836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.832881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.846682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.846742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.846771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.860412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.860470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.860500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.874673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.874733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.888432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.888489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.888519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.900578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.900638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.900668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.916923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.916985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.917017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.928010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.928078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.928109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.941590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.941664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.941694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.955681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.955739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.955769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.968657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.968716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.968746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.982361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.982421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.982465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:46.996420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:46.996481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:46.996495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:47.010497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:47.010557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:47.010587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.620 [2024-12-16 11:40:47.024590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.620 [2024-12-16 11:40:47.024650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.620 [2024-12-16 11:40:47.024688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.039043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.039150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.053100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.053160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.053191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.067048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.067092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.067115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.080825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.080870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.080884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.094470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.094529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.094543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.109237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.109312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.109326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.123080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.123150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.123164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.136999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.137084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.137113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.150969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.151054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.151070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.163661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.163719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.163749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.177033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.888 [2024-12-16 11:40:47.177101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.888 [2024-12-16 11:40:47.177132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.888 [2024-12-16 11:40:47.190455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050)
00:20:41.889 [2024-12-16 11:40:47.190514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.889 [2024-12-16 11:40:47.190544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.889 [2024-12-16 11:40:47.203690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:41.889 [2024-12-16 11:40:47.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.889 [2024-12-16 11:40:47.203779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.889 [2024-12-16 11:40:47.217013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:41.889 [2024-12-16 11:40:47.217085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.889 [2024-12-16 11:40:47.217099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.889 [2024-12-16 11:40:47.230278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:41.889 [2024-12-16 11:40:47.230335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.889 [2024-12-16 11:40:47.230365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.889 17937.00 IOPS, 70.07 MiB/s [2024-12-16T11:40:47.303Z] [2024-12-16 11:40:47.244705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fbc050) 00:20:41.889 [2024-12-16 11:40:47.244748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.889 [2024-12-16 11:40:47.244779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.889 00:20:41.889 Latency(us) 00:20:41.889 [2024-12-16T11:40:47.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.889 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:41.889 nvme0n1 : 2.01 17954.83 70.14 0.00 0.00 7120.11 4230.05 20494.89 00:20:41.889 [2024-12-16T11:40:47.303Z] =================================================================================================================== 00:20:41.889 [2024-12-16T11:40:47.303Z] Total : 17954.83 70.14 0.00 0.00 7120.11 4230.05 20494.89 00:20:41.889 { 00:20:41.889 "results": [ 00:20:41.889 { 00:20:41.889 "job": "nvme0n1", 00:20:41.889 "core_mask": "0x2", 00:20:41.889 "workload": "randread", 00:20:41.889 "status": "finished", 00:20:41.889 "queue_depth": 128, 00:20:41.889 "io_size": 4096, 00:20:41.889 "runtime": 2.006368, 00:20:41.889 "iops": 17954.83181549945, 00:20:41.889 "mibps": 70.13606177929472, 00:20:41.889 "io_failed": 0, 00:20:41.889 "io_timeout": 0, 00:20:41.889 "avg_latency_us": 7120.11038136192, 00:20:41.889 "min_latency_us": 4230.050909090909, 00:20:41.889 "max_latency_us": 20494.894545454546 00:20:41.889 } 00:20:41.889 ], 00:20:41.889 "core_count": 1 00:20:41.889 } 00:20:41.889 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
00:20:41.889 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:41.889 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:41.889 | .driver_specific
00:20:41.889 | .nvme_error
00:20:41.889 | .status_code
00:20:41.889 | .command_transient_transport_error'
00:20:41.889 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:42.173 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 ))
00:20:42.173 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96219
00:20:42.173 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96219 ']'
00:20:42.173 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96219
00:20:42.431 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:42.431 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:42.431 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96219
00:20:42.431 killing process with pid 96219
00:20:42.431 Received shutdown signal, test time was about 2.000000 seconds
00:20:42.431
00:20:42.431 Latency(us)
[2024-12-16T11:40:47.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T11:40:47.845Z] ===================================================================================================================
[2024-12-16T11:40:47.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:42.431 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96219'
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96219
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96219
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96296
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96296 /var/tmp/bperf.sock
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
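Editor's note: the pass/fail decision for the run above is the (( 141 > 0 )) check in the trace: get_transient_errcount asks the bdevperf app for iostat and reads the counter of completions that came back as transient transport errors, which here read 141. A sketch of the same probe, assuming it is run from the spdk repo root against an SPDK app already listening on /var/tmp/bperf.sock with bdev nvme0n1 and --nvme-error-stat enabled:

# sketch of the transient-error probe performed by get_transient_errcount
# (repo-relative rpc.py path and a live /var/tmp/bperf.sock are assumptions)
count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( count > 0 )) || echo "FAIL: no digest errors surfaced as transient transport errors"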
00:20:42.432 11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96296 ']'
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:42.690 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:42.690 Zero copy mechanism will not be used.
00:20:42.690 [2024-12-16 11:40:47.875033] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:20:42.690 [2024-12-16 11:40:47.875140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96296 ]
00:20:42.690 [2024-12-16 11:40:48.018387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.690 [2024-12-16 11:40:48.083487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:42.948 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:42.948 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:42.948 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:42.948 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:43.206 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:43.773 nvme0n1
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
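Editor's note: condensed, the second-run setup traced above is: start bdevperf in wait-for-tests mode on a private RPC socket, enable per-error-code accounting, attach the TCP controller with data digest enabled, arm crc32c corruption in the accel layer, then kick the workload. A replayable sketch of that sequence, run from the spdk repo root; which app's accel layer receives the injection RPC is not visible in the trace and is assumed here to be the bperf socket:

# sketch of the traced setup (repo-relative paths are assumptions)
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
# (the test additionally waits for the RPC socket before issuing RPCs)
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# arm crc32c corruption; the -o/-t/-i arguments are taken verbatim from the trace
scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests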
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:43.773 11:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:43.773 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:43.773 Zero copy mechanism will not be used.
00:20:43.773 Running I/O for 2 seconds...
00:20:43.773 [2024-12-16 11:40:49.020804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50)
00:20:43.773 [2024-12-16 11:40:49.020863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.773 [2024-12-16 11:40:49.020879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same ERROR/READ/COMMAND TRANSIENT TRANSPORT ERROR triple repeats for dozens more len:32 READ completions on tqpair=(0xc41e50), cid and lba varying, from 11:40:49.026 through 11:40:49.435 ...]
00:20:44.043 [2024-12-16 11:40:49.438786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50)
00:20:44.043 [2024-12-16 11:40:49.438974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.043 [2024-12-16 11:40:49.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.043 [2024-12-16 11:40:49.443085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.043 [2024-12-16 11:40:49.443148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.043 [2024-12-16 11:40:49.443161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.043 [2024-12-16 11:40:49.447184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.043 [2024-12-16 11:40:49.447224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.043 [2024-12-16 11:40:49.447238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.043 [2024-12-16 11:40:49.450309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.043 [2024-12-16 11:40:49.450344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.043 [2024-12-16 11:40:49.450372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.306 [2024-12-16 11:40:49.454455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.306 [2024-12-16 11:40:49.454492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.306 [2024-12-16 11:40:49.454520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.306 [2024-12-16 11:40:49.459201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.306 [2024-12-16 11:40:49.459241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.306 [2024-12-16 11:40:49.459254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.306 [2024-12-16 11:40:49.463959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.306 [2024-12-16 11:40:49.463996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.306 [2024-12-16 11:40:49.464035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.306 [2024-12-16 11:40:49.468380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.306 [2024-12-16 11:40:49.468417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.306 [2024-12-16 11:40:49.468445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.306 [2024-12-16 11:40:49.470973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.306 [2024-12-16 11:40:49.471032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.471057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.475642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.475681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.475710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.480160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.480199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.480227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.483073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.483169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.483184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.487850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.488085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.488103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.492841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.492881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.492925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.497706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.497747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:44.307 [2024-12-16 11:40:49.497761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.501312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.501349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.501377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.505678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.505721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.505735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.510976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.511056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.511071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.516357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.516411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.516439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.521467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.521506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.524567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.524604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.524650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.530104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.530151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.530165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.533701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.533744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.533759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.538092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.538136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.538148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.543336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.543378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.543392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.546480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.546518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.546548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.551272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.551314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.551328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.556533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.556572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.556602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.561773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.561814] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.561843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.565027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.565232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.565250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.569433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.569471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.569498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.574726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.574766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.574796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.579629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.579669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.579699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.583168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.583208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.583222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.587509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.587548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.587575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.307 [2024-12-16 11:40:49.592683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.307 [2024-12-16 11:40:49.592725] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.307 [2024-12-16 11:40:49.592755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.597302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.597341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.597369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.600607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.600645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.600673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.605386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.605426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.605455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.610381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.610420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.610449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.613841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.613884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.613899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.618753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.618793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.624056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 
00:20:44.308 [2024-12-16 11:40:49.624271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.624294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.627932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.627973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.628002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.632571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.632611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.632640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.637737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.637777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.637806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.641365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.641404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.641433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.646092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.646130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.646159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.650770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.650808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.650837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.655290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.655344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.659563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.659600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.659645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.662710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.662748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.662777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.667483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.667520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.667548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.671687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.671724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.671753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.676064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.679301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.679343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.684061] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.684121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.684135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.688799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.688837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.688865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.693836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.693875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.693919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.697383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.697594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.697611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.701981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.702191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.702209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.706870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.707085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.707290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.308 [2024-12-16 11:40:49.711815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.712088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.308 [2024-12-16 11:40:49.712209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:44.308 [2024-12-16 11:40:49.717254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.308 [2024-12-16 11:40:49.717297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.309 [2024-12-16 11:40:49.717311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.720850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.720906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.720920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.725426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.725463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.725477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.730658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.730697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.730727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.736188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.736228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.736243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.741175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.741217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.744036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.744220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.744237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.748599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.748637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.748666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.753451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.753505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.753534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.758381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.758422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.758452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.761786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.761841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.761870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.766179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.766221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.766234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.771043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.771148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.775831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.775870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.775914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.780107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.780147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.780176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.783244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.783293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.783306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.788482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.568 [2024-12-16 11:40:49.788525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.568 [2024-12-16 11:40:49.788555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.568 [2024-12-16 11:40:49.793782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.793839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.793868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.797764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.797966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.797984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.802348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.802390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.802419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.807246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.807290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.807304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.812082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.812124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.812152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.815954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.816034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.816049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.818721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.818917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.818936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.823472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.823513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.823542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.828444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.828486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.828516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.833269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.833316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.833330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.836679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.836723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 
[2024-12-16 11:40:49.836736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.841854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.841898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.841912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.847072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.847151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.847165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.851747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.851790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.851805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.854991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.855089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.860427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.860466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.860495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.865400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.865456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.865484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.870081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.870120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.870148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.874296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.874336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.874365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.877380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.877436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.877450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.882047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.882100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.882129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.885477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.885517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.885546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.569 [2024-12-16 11:40:49.889763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.569 [2024-12-16 11:40:49.889965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.569 [2024-12-16 11:40:49.889983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.894644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.894685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.894714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.899655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.899696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.899725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.904517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.904558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.904586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.907423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.907492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.907521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.911890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.912112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.912131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.916189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.916230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.916259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.919731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.919770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.919798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.923876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.923916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.923945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.927628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.927668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.927697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.932788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.932829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.932857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.936432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.936472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.936500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.940265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.940306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.940335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.944432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.944641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.944659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.948551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.948591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.948619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.952450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.952492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.952520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.956838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 
[2024-12-16 11:40:49.956880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.956908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.960240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.960280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.960309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.964449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.964490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.964517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.969366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.969409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.969437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.973585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.973641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.973670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.570 [2024-12-16 11:40:49.976389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.570 [2024-12-16 11:40:49.976427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.570 [2024-12-16 11:40:49.976470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:49.981302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:49.981341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:49.981369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:49.985397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:49.985436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:49.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:49.988934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:49.989116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:49.989135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:49.993764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:49.993926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:49.993945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:49.999371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:49.999415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:49.999429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.003769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.003811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.003825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.007195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.007238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.007253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.011611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.011651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.011679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 7002.00 IOPS, 875.25 MiB/s [2024-12-16T11:40:50.244Z] 
[2024-12-16 11:40:50.017210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.017246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.017276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.020924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.021145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.021164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.025840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.026037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.026055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.029708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.029746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.029774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.034254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.034441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.034459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.039505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.039545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.039573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.044434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.044489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.044517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.048022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.048085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.048099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.052409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.052450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.052494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.057101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.057141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.057170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.061618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.061658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.061686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.064717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.064756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.064785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.069593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.069649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.069678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.073941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.073981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.074009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.077178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.077217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.077245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.081730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.081767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.081795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.085906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.085946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.085975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.089151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.089189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.089217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.830 [2024-12-16 11:40:50.093432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.830 [2024-12-16 11:40:50.093471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.830 [2024-12-16 11:40:50.093500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.097928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.097967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.097995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.100807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.100845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.100873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.105641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.105680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.105708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.110401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.110470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.110498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.115317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.115359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.115373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.118750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.118942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.118960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.123208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.123250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.123274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.127710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.127750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.127778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.131901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.131941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 
[2024-12-16 11:40:50.131969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.134591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.134643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.134671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.139078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.139309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.139527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.144384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.144425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.144453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.148788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.148831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.148845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.153681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.153724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.157901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.157945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.157959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.161673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.161715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.161729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.166445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.166485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.166528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.169823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.169863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.174319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.174361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.174389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.179297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.179339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.179360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.183987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.184065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.184095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.187207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.187248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.187262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.191975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.192067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.192080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.196842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.196883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.196912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.201892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.201931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.201958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.205277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.205471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.205490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.831 [2024-12-16 11:40:50.209872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.831 [2024-12-16 11:40:50.210100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.831 [2024-12-16 11:40:50.210119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.214424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.214464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.214492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.218724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.218764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.218792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.221936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.222150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.222168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.226806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.227026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.227180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.230522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.230715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.230867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.235048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.235246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.235264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.832 [2024-12-16 11:40:50.239300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:44.832 [2024-12-16 11:40:50.239344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.832 [2024-12-16 11:40:50.239359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.243213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.243255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.243284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.248009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.248074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.248088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.252454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 
[2024-12-16 11:40:50.252509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.252537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.256286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.256326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.256354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.261217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.261259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.261288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.264866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.264910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.264924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.269515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.269555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.269583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.274859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.274903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.274917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.279896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.279968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.279996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.282859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.282899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.282927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.287978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.288044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.288058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.293026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.091 [2024-12-16 11:40:50.293099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.091 [2024-12-16 11:40:50.293128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.091 [2024-12-16 11:40:50.297707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.297777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.301259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.301298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.301326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.305561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.305607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.305621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.310331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.310372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.310401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.313860] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.313916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.313946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.318316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.318355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.318383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.323071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.323143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.323157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.327785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.327825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.327854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.330768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.330806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.330834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.335728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.335767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.092 [2024-12-16 11:40:50.340598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.092 [2024-12-16 11:40:50.340800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.092 [2024-12-16 11:40:50.340818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:45.092 [2024-12-16 11:40:50.345541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50)
00:20:45.092 [2024-12-16 11:40:50.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.092 [2024-12-16 11:40:50.345611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern repeats for roughly 140 further READ commands between 11:40:50.348971 and 11:40:50.966444 (elapsed 00:20:45.092 through 00:20:45.617): a data digest error on tqpair=(0xc41e50), the command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, varying only in cid, lba, and sqhd ...]
00:20:45.617 [2024-12-16 11:40:50.969188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50)
00:20:45.617 [2024-12-16 11:40:50.969227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:45.617 [2024-12-16 11:40:50.969241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0
dnr:0 00:20:45.617 [2024-12-16 11:40:50.974411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.974453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.974482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:50.979475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.979532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.979562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:50.982576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.982628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.982657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:50.987810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.987852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.987881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:50.992368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.992408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.992437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:50.997399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:50.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:50.997462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:51.002316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:51.002358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:51.002373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:51.005232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:51.005273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:51.005286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.617 [2024-12-16 11:40:51.010158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:51.010201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:51.010215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.617 7065.00 IOPS, 883.12 MiB/s [2024-12-16T11:40:51.031Z] [2024-12-16 11:40:51.017068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc41e50) 00:20:45.617 [2024-12-16 11:40:51.017115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.617 [2024-12-16 11:40:51.017130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.617 00:20:45.617 Latency(us) 00:20:45.617 [2024-12-16T11:40:51.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:45.617 nvme0n1 : 2.00 7063.23 882.90 0.00 0.00 2261.03 584.61 9234.62 00:20:45.617 [2024-12-16T11:40:51.031Z] =================================================================================================================== 00:20:45.617 [2024-12-16T11:40:51.031Z] Total : 7063.23 882.90 0.00 0.00 2261.03 584.61 9234.62 00:20:45.617 { 00:20:45.617 "results": [ 00:20:45.617 { 00:20:45.617 "job": "nvme0n1", 00:20:45.617 "core_mask": "0x2", 00:20:45.617 "workload": "randread", 00:20:45.617 "status": "finished", 00:20:45.617 "queue_depth": 16, 00:20:45.617 "io_size": 131072, 00:20:45.617 "runtime": 2.002767, 00:20:45.617 "iops": 7063.228024028756, 00:20:45.617 "mibps": 882.9035030035946, 00:20:45.617 "io_failed": 0, 00:20:45.617 "io_timeout": 0, 00:20:45.617 "avg_latency_us": 2261.0339323676467, 00:20:45.617 "min_latency_us": 584.6109090909091, 00:20:45.617 "max_latency_us": 9234.618181818181 00:20:45.617 } 00:20:45.617 ], 00:20:45.617 "core_count": 1 00:20:45.617 } 00:20:45.875 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:45.875 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:45.875 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:45.875 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:45.875 | .driver_specific 00:20:45.875 | .nvme_error 00:20:45.875 | .status_code 00:20:45.875 | 
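Taken together, the three xtrace lines above are the whole pass/fail probe: ask bdevperf for per-bdev NVMe error statistics over its private RPC socket, then pull a single counter out of the returned JSON. A minimal standalone sketch of the same check, reusing the socket path, bdev name, and jq filter from this log (the errcount variable name is ours):

    #!/usr/bin/env bash
    # Read bdevperf's iostat for nvme0n1 and extract the number of I/Os that
    # completed with TRANSIENT TRANSPORT ERROR (00/22). The nvme_error block
    # is only populated because bdev_nvme_set_options was called with
    # --nvme-error-stat earlier in this run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest-error test only passes if at least one injected error was seen.
    (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"

Here the counter came back as 457, so the (( 457 > 0 )) check below succeeds and the randread pass is torn down.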
00:20:46.132 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 457 > 0 ))
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96296
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96296 ']'
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96296
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96296
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:46.132 killing process with pid 96296
00:20:46.132 Received shutdown signal, test time was about 2.000000 seconds
00:20:46.132
00:20:46.132 Latency(us)
[2024-12-16T11:40:51.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T11:40:51.546Z] ===================================================================================================================
[2024-12-16T11:40:51.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96296'
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96296
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96296
00:20:46.389 11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96371
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
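Each run_bperf_err pass recycles the pattern just traced: bdevperf is started idle (-z) on its own RPC socket, the harness waits for that socket to come up, and only later does perform_tests kick off the actual I/O. A rough shell equivalent of the relaunch above, with the polling loop paraphrased (the real waitforlisten helper in autotest_common.sh does more; only the 100-retry budget is visible in the trace, the 0.1 s sleep is our assumption):

    # Pin bdevperf to core 1 (-m 2), serve RPCs on a private UNIX socket,
    # and stay idle (-z) until perform_tests is sent over that socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # waitforlisten: poll until the RPC socket exists, up to max_retries tries.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done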
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96371 /var/tmp/bperf.sock
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96371 ']'
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:46.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 11:40:51.705881] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
[2024-12-16 11:40:51.706188] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96371 ]
[2024-12-16 11:40:51.849429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 11:40:51.895205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:47.581 11:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
11:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
11:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:47.840 11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:48.098 nvme0n1
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
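This setup block is what makes the WRITE failures that follow deterministic rather than flaky: error counting and unlimited bdev retries are enabled first, any stale crc32c injection is cleared, the controller is attached with data digest on (--ddgst), and only then are the next 256 crc32c results corrupted. Condensed into plain RPC calls (note the trace uses two sockets: bperf_rpc targets /var/tmp/bperf.sock, while rpc_cmd goes to a default socket, presumably the nvmf target's; that routing is our reading of the trace, not something the log states):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # On the bdevperf side: count NVMe error completions per status code and
    # retry failed I/O indefinitely, so injected errors never abort the job.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # On the target side: clear any leftover injection on the crc32c accel op.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled, so every corrupted CRC surfaces as a
    # data digest error and a TRANSIENT TRANSPORT ERROR (00/22) completion.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt the result of the next 256 crc32c operations.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256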
00:20:48.098 Running I/O for 2 seconds...
00:20:48.356 [2024-12-16 11:40:53.524600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef3a28
00:20:48.356 [2024-12-16 11:40:53.525824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:48.356 [2024-12-16 11:40:53.525883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats from 11:40:53.538757 through 11:40:54.504219 for dozens of further writes on tqpair 0xb3fe60, with only the pdu, cid, lba, and sqhd fields varying ...]
00:20:49.135 21807.00 IOPS, 85.18 MiB/s [2024-12-16T11:40:54.549Z]
[... the digest-error triplets continue on tqpair 0xb3fe60 through 11:40:54.560237 ...]
00:20:49.394 [2024-12-16 
11:40:54.567616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efd640 00:20:49.394 [2024-12-16 11:40:54.568566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.568615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.578748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef96f8 00:20:49.394 [2024-12-16 11:40:54.580192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.580243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.588983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee6300 00:20:49.394 [2024-12-16 11:40:54.590101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.590159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.599761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeee38 00:20:49.394 [2024-12-16 11:40:54.600844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.600893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.613204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eec408 00:20:49.394 [2024-12-16 11:40:54.614917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.614968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.621351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef46d0 00:20:49.394 [2024-12-16 11:40:54.622179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.622242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.634739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efe720 00:20:49.394 [2024-12-16 11:40:54.636254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.636306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:20:49.394 [2024-12-16 11:40:54.645156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eefae0 00:20:49.394 [2024-12-16 11:40:54.646339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.646388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.655882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef8e88 00:20:49.394 [2024-12-16 11:40:54.657056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.657132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.667329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeea00 00:20:49.394 [2024-12-16 11:40:54.668497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.668545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.678570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee49b0 00:20:49.394 [2024-12-16 11:40:54.679448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.679515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.689882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efc128 00:20:49.394 [2024-12-16 11:40:54.691047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.691158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.703543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efda78 00:20:49.394 [2024-12-16 11:40:54.705328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.705377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.711770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efc998 00:20:49.394 [2024-12-16 11:40:54.712693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.712740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.725386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef6020 00:20:49.394 [2024-12-16 11:40:54.726931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.726980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.736128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ede470 00:20:49.394 [2024-12-16 11:40:54.737524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.737574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.747587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef9b30 00:20:49.394 [2024-12-16 11:40:54.748827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.748876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.761383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef6890 00:20:49.394 [2024-12-16 11:40:54.763288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.763327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.770613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef4298 00:20:49.394 [2024-12-16 11:40:54.771554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.783959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef35f0 00:20:49.394 [2024-12-16 11:40:54.785551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.785598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.794082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee6fa8 00:20:49.394 [2024-12-16 11:40:54.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.394 [2024-12-16 11:40:54.796144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:49.394 [2024-12-16 11:40:54.806200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efbcf0 00:20:49.653 [2024-12-16 11:40:54.807308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.807347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.816381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee23b8 00:20:49.653 [2024-12-16 11:40:54.817651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.817699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.828219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016edf550 00:20:49.653 [2024-12-16 11:40:54.829359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.829394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.842792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efeb58 00:20:49.653 [2024-12-16 11:40:54.844732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.844780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.851416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef7100 00:20:49.653 [2024-12-16 11:40:54.852341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.852390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.865359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef31b8 00:20:49.653 [2024-12-16 11:40:54.866948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.653 [2024-12-16 11:40:54.866996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.653 [2024-12-16 11:40:54.876167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee5658 00:20:49.654 [2024-12-16 11:40:54.877428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.877478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.887081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef31b8 00:20:49.654 [2024-12-16 11:40:54.888341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.888389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.900459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef7100 00:20:49.654 [2024-12-16 11:40:54.902260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.902306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.908750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efeb58 00:20:49.654 [2024-12-16 11:40:54.909735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.909781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.922307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016edf550 00:20:49.654 [2024-12-16 11:40:54.923970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.924030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.932890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee12d8 00:20:49.654 [2024-12-16 11:40:54.934259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.934321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.944060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee1710 00:20:49.654 [2024-12-16 11:40:54.945427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.945478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.957580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee27f0 00:20:49.654 [2024-12-16 11:40:54.959578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.959626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.965796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eddc00 00:20:49.654 [2024-12-16 11:40:54.966865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.966913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.977240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee9e10 00:20:49.654 [2024-12-16 11:40:54.978333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:54.988153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee1710 00:20:49.654 [2024-12-16 11:40:54.989213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:54.989262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.001246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efbcf0 00:20:49.654 [2024-12-16 11:40:55.002863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.002911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.011671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef2948 00:20:49.654 [2024-12-16 11:40:55.013124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.013169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.022740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efd208 00:20:49.654 [2024-12-16 11:40:55.024193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.024243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.033226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eebfd0 00:20:49.654 [2024-12-16 11:40:55.034385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.034432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.044170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee3d08 00:20:49.654 [2024-12-16 11:40:55.045231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.045267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:49.654 [2024-12-16 11:40:55.058488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef6890 00:20:49.654 [2024-12-16 11:40:55.060401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.654 [2024-12-16 11:40:55.060439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.066744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef2510 00:20:49.911 [2024-12-16 11:40:55.067529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.067568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.080676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef7970 00:20:49.911 [2024-12-16 11:40:55.082261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.082312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.092356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef3e60 00:20:49.911 [2024-12-16 11:40:55.093494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.093544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.104442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eed4e8 00:20:49.911 [2024-12-16 11:40:55.105699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.105750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.118729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef81e0 00:20:49.911 [2024-12-16 11:40:55.120599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.120652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.127254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efe720 00:20:49.911 [2024-12-16 11:40:55.128246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.128306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.141205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee0a68 00:20:49.911 [2024-12-16 11:40:55.142772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.142823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.151895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef31b8 00:20:49.911 [2024-12-16 11:40:55.153198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.163377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeaab8 00:20:49.911 [2024-12-16 11:40:55.164751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.164799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:49.911 [2024-12-16 11:40:55.176896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee6b70 00:20:49.911 [2024-12-16 11:40:55.178824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.911 [2024-12-16 11:40:55.178871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.185257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeaef0 00:20:49.912 [2024-12-16 11:40:55.186305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.186354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.199515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee99d8 00:20:49.912 [2024-12-16 11:40:55.201263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 
11:40:55.201302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.210193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee1710 00:20:49.912 [2024-12-16 11:40:55.211573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.211638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.221618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee49b0 00:20:49.912 [2024-12-16 11:40:55.223014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.223072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.233468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef35f0 00:20:49.912 [2024-12-16 11:40:55.234826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.234878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.244554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef4b08 00:20:49.912 [2024-12-16 11:40:55.245811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.245861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.255985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ede038 00:20:49.912 [2024-12-16 11:40:55.257090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.257127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.270396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efbcf0 00:20:49.912 [2024-12-16 11:40:55.272169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.272222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.281885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee12d8 00:20:49.912 [2024-12-16 11:40:55.283362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:49.912 [2024-12-16 11:40:55.283404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.293335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eee190 00:20:49.912 [2024-12-16 11:40:55.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.294802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.304035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016efe2e8 00:20:49.912 [2024-12-16 11:40:55.305206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.305242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.912 [2024-12-16 11:40:55.315934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ede8a8 00:20:49.912 [2024-12-16 11:40:55.317141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.912 [2024-12-16 11:40:55.317190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.328764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef96f8 00:20:50.169 [2024-12-16 11:40:55.329921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.169 [2024-12-16 11:40:55.329973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.340139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eee190 00:20:50.169 [2024-12-16 11:40:55.341173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.169 [2024-12-16 11:40:55.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.353854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee6b70 00:20:50.169 [2024-12-16 11:40:55.355540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.169 [2024-12-16 11:40:55.355596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.365979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeaab8 00:20:50.169 [2024-12-16 11:40:55.367673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:50.169 [2024-12-16 11:40:55.367740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.377585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee01f8 00:20:50.169 [2024-12-16 11:40:55.379033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.169 [2024-12-16 11:40:55.379091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:50.169 [2024-12-16 11:40:55.388451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef0bc0 00:20:50.169 [2024-12-16 11:40:55.389811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.389861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.401469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee0a68 00:20:50.170 [2024-12-16 11:40:55.403192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.403229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.409615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eeff18 00:20:50.170 [2024-12-16 11:40:55.410462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.410510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.421549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef0ff8 00:20:50.170 [2024-12-16 11:40:55.422434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.422481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.434875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eea248 00:20:50.170 [2024-12-16 11:40:55.435943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.435996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.445912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee38d0 00:20:50.170 [2024-12-16 11:40:55.446835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.446883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.456823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eec408 00:20:50.170 [2024-12-16 11:40:55.457552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.457603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.470419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eee5c8 00:20:50.170 [2024-12-16 11:40:55.472541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.472590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.478990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016eec840 00:20:50.170 [2024-12-16 11:40:55.480080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.480121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.492670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ee9e10 00:20:50.170 [2024-12-16 11:40:55.494363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.494397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:50.170 [2024-12-16 11:40:55.502798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3fe60) with pdu=0x200016ef5378 00:20:50.170 [2024-12-16 11:40:55.504709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.170 [2024-12-16 11:40:55.504748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:50.170 21955.00 IOPS, 85.76 MiB/s 00:20:50.170 Latency(us) 00:20:50.170 [2024-12-16T11:40:55.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.170 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.170 nvme0n1 : 2.00 21977.59 85.85 0.00 0.00 5816.52 2263.97 14656.23 00:20:50.170 [2024-12-16T11:40:55.584Z] =================================================================================================================== 00:20:50.170 [2024-12-16T11:40:55.584Z] Total : 21977.59 85.85 0.00 0.00 5816.52 2263.97 14656.23 00:20:50.170 { 00:20:50.170 "results": [ 00:20:50.170 { 00:20:50.170 "job": "nvme0n1", 00:20:50.170 "core_mask": "0x2", 00:20:50.170 "workload": "randwrite", 00:20:50.170 "status": "finished", 
00:20:50.170 "queue_depth": 128, 00:20:50.170 "io_size": 4096, 00:20:50.170 "runtime": 2.003768, 00:20:50.170 "iops": 21977.59421250364, 00:20:50.170 "mibps": 85.84997739259235, 00:20:50.170 "io_failed": 0, 00:20:50.170 "io_timeout": 0, 00:20:50.170 "avg_latency_us": 5816.524623279895, 00:20:50.170 "min_latency_us": 2263.970909090909, 00:20:50.170 "max_latency_us": 14656.232727272727 00:20:50.170 } 00:20:50.170 ], 00:20:50.170 "core_count": 1 00:20:50.170 } 00:20:50.170 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:50.170 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:50.170 | .driver_specific 00:20:50.170 | .nvme_error 00:20:50.170 | .status_code 00:20:50.170 | .command_transient_transport_error' 00:20:50.170 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:50.170 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96371 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96371 ']' 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96371 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96371 00:20:50.737 killing process with pid 96371 00:20:50.737 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.737 00:20:50.737 Latency(us) 00:20:50.737 [2024-12-16T11:40:56.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.737 [2024-12-16T11:40:56.151Z] =================================================================================================================== 00:20:50.737 [2024-12-16T11:40:56.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96371' 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96371 00:20:50.737 11:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96371 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 
00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96463 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96463 /var/tmp/bperf.sock 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96463 ']' 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.737 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:50.737 Zero copy mechanism will not be used. 00:20:50.737 [2024-12-16 11:40:56.137027] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:20:50.737 [2024-12-16 11:40:56.137152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96463 ] 00:20:50.995 [2024-12-16 11:40:56.287075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.995 [2024-12-16 11:40:56.346062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.254 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.254 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:51.254 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.254 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.511 11:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.769 nvme0n1 00:20:51.769 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:51.769 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.769 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.769 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.769 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:51.770 11:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:52.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:52.028 Zero copy mechanism will not be used. 00:20:52.028 Running I/O for 2 seconds... 
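The xtrace above sets up the second digest-error pass end to end. Condensed into plain shell, the sequence looks roughly like the sketch below; every command and flag is taken verbatim from the trace, and the only assumptions are that bperf_rpc forwards to rpc.py against /var/tmp/bperf.sock (as the -s flag in the trace shows) while rpc_cmd addresses the nvmf target's default RPC socket.

# Condensed sketch of the traced setup (host/digest.sh@56-69). Paths and flags
# are verbatim from the trace; sequencing is the same, only the wrappers are
# unrolled.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# Launch bdevperf on core 1 (-m 2) for a 2 s randwrite run, 128 KiB I/O, QD 16;
# -z makes it idle until an explicit perform_tests RPC arrives.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection left over from the previous pass...
"$RPC" accel_error_inject_error -o crc32c -t disable

# ...then attach the target with data digest enabled, creating bdev nvme0n1.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm the injection (-t corrupt -i 32) so affected WRITEs fail the data
# digest check and complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed run over the bdevperf socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

With --nvme-error-stat enabled and a retry count of -1, every write that fails its digest check is retried and counted rather than failing the workload, so the stream of digest-error records that follows is the expected output of this pass.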
00:20:52.028 [2024-12-16 11:40:57.244597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.028 [2024-12-16 11:40:57.244751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.028 [2024-12-16 11:40:57.244779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.028 [2024-12-16 11:40:57.249959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.250102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.250125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.254951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.255098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.255148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.259975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.260115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.260151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.264825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.264957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.264978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.269634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.269756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.269779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.274373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.274482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.279114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.279263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.279286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.283952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.284133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.288772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.288876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.288899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.293627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.293757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.293785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.298618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.298774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.298797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.303793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.303899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.303923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.308992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.309179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.309202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.314247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.314359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.314380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.319453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.319589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.319611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.324701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.324797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.324820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.329852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.329951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.329974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.334998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.335183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.335205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.340065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.340197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.340218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.345043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.345177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.345199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.350178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.350319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.350340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.355047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.355186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.355209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.359746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.359858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.359879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.364512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.364625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.364647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.369305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.369425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.369446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.374057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.374181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.374202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.378851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.029 [2024-12-16 11:40:57.378972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.029 [2024-12-16 11:40:57.378993] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.029 [2024-12-16 11:40:57.383707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.383817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.388647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.388772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.388795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.393534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.393665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.393687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.398352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.398471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.398492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.403180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.403263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.403286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.407882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.408006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.408027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.412743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.412863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.412885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.417598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.417731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.417754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.422392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.422510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.422531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.427173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.427284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.427306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.431894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.432016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.432038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.030 [2024-12-16 11:40:57.436610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.030 [2024-12-16 11:40:57.436729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.030 [2024-12-16 11:40:57.436750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.441319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.441441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.441463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.446086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.446209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 
11:40:57.446231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.450869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.450995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.455758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.455880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.455903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.460921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.461076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.461111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.465843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.465993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.466016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.471006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.471209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.471231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.476239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.476356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.476378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.481480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.481594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.290 [2024-12-16 11:40:57.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.486509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.486641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.486663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.491641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.491760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.491782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.496585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.290 [2024-12-16 11:40:57.496728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.290 [2024-12-16 11:40:57.496750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.290 [2024-12-16 11:40:57.501571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.501708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.501730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.506450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.506565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.506587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.511233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.511327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.511349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.516276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.516414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.516451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.521081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.521202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.521223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.527534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.527646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.527669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.532543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.532648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.532671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.537358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.537467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.537489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.542151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.542292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.542313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.546926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.547053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.547088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.551842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.551955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.551977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.556767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.556907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.556930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.561732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.561859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.561881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.566768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.566894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.566917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.571655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.571778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.571802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.576714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.576841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.576865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.581762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.581876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.581898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.586611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.586745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.586767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.591528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.591643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.591665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.596488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.596576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.596599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.601556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.601671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.601694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.606766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.606896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.606920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.611785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.611911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.611933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.616902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.617051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.622482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.622616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.622654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.627419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.627562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.627583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.632503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.632617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.291 [2024-12-16 11:40:57.632638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.291 [2024-12-16 11:40:57.637354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.291 [2024-12-16 11:40:57.637483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.637505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.642585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.642710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.642733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.647773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.647901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.647923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.652920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.653053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.653091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.657906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.658030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.658067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.662799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.662929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.662952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.667717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.667857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.667880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.672606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.672728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.672748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.677389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.677496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.677517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.682281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.682377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.682403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.687498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.687609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.687631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.692523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 
11:40:57.692635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.692656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.292 [2024-12-16 11:40:57.697484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.292 [2024-12-16 11:40:57.697607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.292 [2024-12-16 11:40:57.697630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.702450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.702563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.702584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.707219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.707338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.707360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.712035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.712155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.712178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.717062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.717177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.717198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.722178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.722273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.722294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.726985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 
00:20:52.552 [2024-12-16 11:40:57.727149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.727171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.552 [2024-12-16 11:40:57.731968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.552 [2024-12-16 11:40:57.732114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.552 [2024-12-16 11:40:57.732136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.736853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.736941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.736964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.741868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.742005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.742027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.746859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.746994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.747016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.751903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.752019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.752041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.756758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.756859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.756880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.761713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with 
pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.761835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.761857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.766841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.766968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.766989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.771792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.771945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.776885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.776980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.777001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.782232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.782316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.782338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.787545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.787696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.792724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:52.553 [2024-12-16 11:40:57.792832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.553 [2024-12-16 11:40:57.792854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.553 [2024-12-16 11:40:57.797942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb401a0) with pdu=0x200016eff3c8
00:20:52.553 [2024-12-16 11:40:57.798028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.553 [2024-12-16 11:40:57.798052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:52.553 [2024-12-16 11:40:57.803094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8
00:20:52.553 [2024-12-16 11:40:57.803205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.553 [2024-12-16 11:40:57.803227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern -- a tcp.c:2241:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8, the affected WRITE command (sqid:1 cid:0 nsid:1, varying lba, len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062 -- repeats at ~5 ms intervals from 11:40:57.808 through 11:40:58.232 ...]
00:20:53.077 6107.00 IOPS, 763.38 MiB/s [2024-12-16T11:40:58.491Z]
[... the same pattern resumes at 11:40:58.239 and continues through 11:40:58.494 ...]
00:20:53.349 [2024-12-16 11:40:58.499258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8
00:20:53.349 [2024-12-16 11:40:58.499346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.349 [2024-12-16 11:40:58.499369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:53.349 [2024-12-16 11:40:58.504347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8
00:20:53.349 [2024-12-16 11:40:58.504458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.349 [2024-12-16 11:40:58.504480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:53.349
[2024-12-16 11:40:58.509599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.509694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.509716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.514836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.514936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.514958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.519756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.519857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.519879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.524817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.524922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.524943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.530112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.530247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.530268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.535375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.535502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.535523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.540397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.540489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:53.349 [2024-12-16 11:40:58.545418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.545517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.349 [2024-12-16 11:40:58.545539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.349 [2024-12-16 11:40:58.550622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.349 [2024-12-16 11:40:58.550756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.550778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.555738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.555847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.555869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.560924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.561063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.561087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.565745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.565844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.565866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.570978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.571115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.571162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.576156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.576286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.576308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.581614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.581712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.581735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.586460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.586558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.586580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.591409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.591535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.591555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.596510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.596623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.596644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.601124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.601221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.601242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.606188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.606284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.606321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.611435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.611592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.611612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.616296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.616420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.621485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.621583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.621604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.626518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.626617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.626641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.631841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.631939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.631960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.637152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.637432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.642255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.642350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.642374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.647193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.647281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.647304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.652397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.652491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.652513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.657424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.657522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.657544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.662561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.662699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.662722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.667773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.667872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.667911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.672944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.673044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.673066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.678128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.678241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.678279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.683037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.683165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.683189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.688453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.688588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.350 [2024-12-16 11:40:58.688610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.350 [2024-12-16 11:40:58.693589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.350 [2024-12-16 11:40:58.693691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.693714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.698513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.698609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.698632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.703829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.703925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.703947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.708949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.709053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.709076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.714010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.714141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.714164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.719236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.719347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.719370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.724127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.724214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.724237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.729284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.729375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.729396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.734299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.734382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.734405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.739318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.739401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.739425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.744510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.744637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.744659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.749566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.749666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.749687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.754556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.754654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 
11:40:58.754676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.351 [2024-12-16 11:40:58.759824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.351 [2024-12-16 11:40:58.759947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.351 [2024-12-16 11:40:58.759969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.765030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.765140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.769828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.769925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.769947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.774705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.774797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.774818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.779985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.780134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.784849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.784966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.784989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.790255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.790363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.616 [2024-12-16 11:40:58.790384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.795423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.795505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.795529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.800270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.800382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.800403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.805125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.805217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.805239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.810479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.810571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.810593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.815238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.815308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.815330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.820451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.820542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.820564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.825679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.825767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.825790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.830963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.831155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.831178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.836233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.836315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.616 [2024-12-16 11:40:58.836337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.616 [2024-12-16 11:40:58.841247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.616 [2024-12-16 11:40:58.841321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.841343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.846401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.846492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.846515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.851706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.851791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.851813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.857081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.857177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.857210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.862521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.862621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.862660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.867743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.867900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.872907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.873006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.873043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.878162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.878260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.878281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.883370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.883468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.883491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.888479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.888571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.888591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.893343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.893467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.893488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.898353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.898463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.898485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.903584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.903687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.903710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.909054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.909165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.909187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.914256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.914341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.914365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.919256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.919350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.919375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.924394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.924506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.924528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.929757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.929846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.929872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.935319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.935470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.935503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.940542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.940655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.940680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.945890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.945995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.946044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.951251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.951356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.951381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.956330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.956419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.956443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.961466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.961587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.961610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.966679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.966797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.966821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.617 [2024-12-16 11:40:58.971758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.617 [2024-12-16 11:40:58.971869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.617 [2024-12-16 11:40:58.971900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:53.617 [2024-12-16 11:40:58.977395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8
00:20:53.617 [2024-12-16 11:40:58.977519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:53.617 [2024-12-16 11:40:58.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... roughly fifty further data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets, identical apart from timestamp, lba, and sqhd, elided for readability; the trace resumes mid-entry below ...]
00:20:53.878 [2024-12-16
11:40:59.227689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.878 [2024-12-16 11:40:59.227827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-12-16 11:40:59.227849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.878 [2024-12-16 11:40:59.232961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.878 [2024-12-16 11:40:59.233069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-12-16 11:40:59.233090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.878 6058.00 IOPS, 757.25 MiB/s [2024-12-16T11:40:59.292Z] [2024-12-16 11:40:59.239380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb401a0) with pdu=0x200016eff3c8 00:20:53.878 [2024-12-16 11:40:59.239466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.878 [2024-12-16 11:40:59.239489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.878 00:20:53.878 Latency(us) 00:20:53.878 [2024-12-16T11:40:59.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.878 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:53.879 nvme0n1 : 2.00 6054.83 756.85 0.00 0.00 2636.22 2070.34 10485.76 00:20:53.879 [2024-12-16T11:40:59.293Z] =================================================================================================================== 00:20:53.879 [2024-12-16T11:40:59.293Z] Total : 6054.83 756.85 0.00 0.00 2636.22 2070.34 10485.76 00:20:53.879 { 00:20:53.879 "results": [ 00:20:53.879 { 00:20:53.879 "job": "nvme0n1", 00:20:53.879 "core_mask": "0x2", 00:20:53.879 "workload": "randwrite", 00:20:53.879 "status": "finished", 00:20:53.879 "queue_depth": 16, 00:20:53.879 "io_size": 131072, 00:20:53.879 "runtime": 2.003689, 00:20:53.879 "iops": 6054.83186262938, 00:20:53.879 "mibps": 756.8539828286725, 00:20:53.879 "io_failed": 0, 00:20:53.879 "io_timeout": 0, 00:20:53.879 "avg_latency_us": 2636.219754818212, 00:20:53.879 "min_latency_us": 2070.3418181818183, 00:20:53.879 "max_latency_us": 10485.76 00:20:53.879 } 00:20:53.879 ], 00:20:53.879 "core_count": 1 00:20:53.879 } 00:20:53.879 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:53.879 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:53.879 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:53.879 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:53.879 | .driver_specific 00:20:53.879 | .nvme_error 00:20:53.879 | .status_code 00:20:53.879 | .command_transient_transport_error' 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@71 -- # (( 392 > 0 )) 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96463 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96463 ']' 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96463 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96463 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:54.445 killing process with pid 96463 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96463' 00:20:54.445 Received shutdown signal, test time was about 2.000000 seconds 00:20:54.445 00:20:54.445 Latency(us) 00:20:54.445 [2024-12-16T11:40:59.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.445 [2024-12-16T11:40:59.859Z] =================================================================================================================== 00:20:54.445 [2024-12-16T11:40:59.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96463 00:20:54.445 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96463 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 96175 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96175 ']' 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96175 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96175 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96175' 00:20:54.704 killing process with pid 96175 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96175 00:20:54.704 11:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96175 00:20:54.704 00:20:54.704 real 0m17.272s 00:20:54.704 user 0m33.140s 00:20:54.704 sys 0m4.601s 00:20:54.704 11:41:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.704 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.704 ************************************ 00:20:54.704 END TEST nvmf_digest_error 00:20:54.704 ************************************ 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.962 rmmod nvme_tcp 00:20:54.962 rmmod nvme_fabrics 00:20:54.962 rmmod nvme_keyring 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 96175 ']' 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 96175 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 96175 ']' 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 96175 00:20:54.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96175) - No such process 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 96175 is not found' 00:20:54.962 Process with pid 96175 is not found 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:54.962 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:55.221 00:20:55.221 real 0m34.000s 00:20:55.221 user 1m3.729s 00:20:55.221 sys 0m9.297s 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:55.221 ************************************ 00:20:55.221 END TEST nvmf_digest 00:20:55.221 ************************************ 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.221 ************************************ 00:20:55.221 START TEST nvmf_mdns_discovery 00:20:55.221 ************************************ 00:20:55.221 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:55.480 * Looking for test storage... 
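Each error/WRITE/completion triplet in the digest test above is one write that failed the NVMe/TCP data digest (CRC32C) check and was completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the pass/fail decision at the end of the test reduces to reading one counter back out of bdevperf over its RPC socket. A minimal standalone sketch of that check, with the rpc.py path, bperf socket, bdev name, and jq filter taken from the trace:

    #!/usr/bin/env bash
    # Sketch only: re-derive the transient-transport-error count that
    # host/digest.sh compares against 0 above ("(( 392 > 0 ))").
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # A non-zero count means the digest errors were surfaced end to end.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"
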
00:20:55.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:55.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.480 --rc genhtml_branch_coverage=1 00:20:55.480 --rc genhtml_function_coverage=1 00:20:55.480 --rc genhtml_legend=1 00:20:55.480 --rc geninfo_all_blocks=1 00:20:55.480 --rc geninfo_unexecuted_blocks=1 00:20:55.480 00:20:55.480 ' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:55.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.480 --rc genhtml_branch_coverage=1 00:20:55.480 --rc genhtml_function_coverage=1 00:20:55.480 --rc genhtml_legend=1 00:20:55.480 --rc geninfo_all_blocks=1 00:20:55.480 --rc geninfo_unexecuted_blocks=1 00:20:55.480 00:20:55.480 ' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:55.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.480 --rc genhtml_branch_coverage=1 00:20:55.480 --rc genhtml_function_coverage=1 00:20:55.480 --rc genhtml_legend=1 00:20:55.480 --rc geninfo_all_blocks=1 00:20:55.480 --rc geninfo_unexecuted_blocks=1 00:20:55.480 00:20:55.480 ' 00:20:55.480 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:55.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.481 --rc genhtml_branch_coverage=1 00:20:55.481 --rc genhtml_function_coverage=1 00:20:55.481 --rc genhtml_legend=1 00:20:55.481 --rc geninfo_all_blocks=1 00:20:55.481 --rc geninfo_unexecuted_blocks=1 00:20:55.481 00:20:55.481 ' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.481 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:55.481 Cannot find device "nvmf_init_br" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:55.481 Cannot find device "nvmf_init_br2" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:55.481 Cannot find device "nvmf_tgt_br" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.481 Cannot find device "nvmf_tgt_br2" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:55.481 Cannot find device "nvmf_init_br" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:55.481 Cannot find device "nvmf_init_br2" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:55.481 Cannot find device "nvmf_tgt_br" 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:20:55.481 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:55.481 Cannot find device "nvmf_tgt_br2" 00:20:55.482 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:20:55.482 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:55.482 Cannot find device "nvmf_br" 00:20:55.482 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:20:55.482 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:55.740 Cannot find device "nvmf_init_if" 00:20:55.740 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:20:55.740 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:55.740 Cannot find device "nvmf_init_if2" 00:20:55.740 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:20:55.740 11:41:00 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.740 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:55.741 11:41:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
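The nvmf_veth_init trace above builds a small bridged topology: initiator-side interfaces stay in the default namespace, target-side interfaces are moved into nvmf_tgt_ns_spdk, and all the peer ends meet on the nvmf_br bridge (the master commands follow below). Condensed to a single initiator/target pair, the shape is roughly:

    # Sketch of the topology (one pair of each shown; the real helper in
    # test/nvmf/common.sh creates two, adding 10.0.0.2 and 10.0.0.4).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # both *_br peer ends join this bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
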
00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:55.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:55.741 00:20:55.741 --- 10.0.0.3 ping statistics --- 00:20:55.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.741 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:55.741 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:55.741 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:20:55.741 00:20:55.741 --- 10.0.0.4 ping statistics --- 00:20:55.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.741 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:55.741 00:20:55.741 --- 10.0.0.1 ping statistics --- 00:20:55.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.741 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:55.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:55.741 00:20:55.741 --- 10.0.0.2 ping statistics --- 00:20:55.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.741 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=96804 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 96804 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 96804 ']' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:55.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.741 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.000 [2024-12-16 11:41:01.203653] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
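nvmfappstart here launches the target inside the namespace with --wait-for-rpc, so the app starts its RPC server but pauses before subsystem initialization until an explicit framework_start_init (issued further down, after nvmf_set_config). waitforlisten then blocks until the RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming an SPDK repo checkout for the binary and rpc.py paths — the real waitforlisten helper adds retry limits and more careful error handling:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app responds.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done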
00:20:56.000 [2024-12-16 11:41:01.203761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.000 [2024-12-16 11:41:01.358920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.258 [2024-12-16 11:41:01.424910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.258 [2024-12-16 11:41:01.424987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.258 [2024-12-16 11:41:01.425043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.258 [2024-12-16 11:41:01.425056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.258 [2024-12-16 11:41:01.425065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.258 [2024-12-16 11:41:01.425527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.258 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 [2024-12-16 11:41:01.703359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 [2024-12-16 11:41:01.711509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 null0 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 null1 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 null2 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 null3 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=96839 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 96839 /tmp/host.sock 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 96839 ']' 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.517 Waiting 
for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.517 11:41:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:56.517 [2024-12-16 11:41:01.823643] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:20:56.517 [2024-12-16 11:41:01.823771] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96839 ] 00:20:56.776 [2024-12-16 11:41:01.976838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.776 [2024-12-16 11:41:02.043064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.776 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.776 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:56.776 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:56.776 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:56.776 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:57.035 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=96857 00:20:57.035 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:57.035 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:57.035 11:41:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:57.035 Process 1061 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:57.035 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:57.035 Successfully dropped root privileges. 00:20:57.035 avahi-daemon 0.8 starting up. 00:20:57.035 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:57.035 Successfully called chroot(). 00:20:57.035 Successfully dropped remaining capabilities. 00:20:57.970 No service file found in /etc/avahi/services. 00:20:57.970 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:20:57.970 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:57.970 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:20:57.970 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:57.970 Network interface enumeration completed. 00:20:57.970 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:20:57.970 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:20:57.970 Registering new address record for fe80::9474:62ff:fe99:33ad on nvmf_tgt_if.*. 00:20:57.970 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:20:57.970 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1496178699. 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:57.970 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:20:58.229 11:41:03 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.229 [2024-12-16 11:41:03.601086] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:58.229 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 [2024-12-16 11:41:03.672059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 11:41:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:20:59.422 [2024-12-16 11:41:04.501094] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:59.680 [2024-12-16 11:41:04.901109] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:59.680 [2024-12-16 11:41:04.901147] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:59.680 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:59.680 cookie is 0 00:20:59.680 is_local: 1 00:20:59.680 our_own: 0 00:20:59.680 wide_area: 0 00:20:59.680 multicast: 1 00:20:59.680 cached: 1 00:20:59.681 [2024-12-16 11:41:05.001103] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:59.681 [2024-12-16 11:41:05.001141] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:59.681 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:59.681 cookie is 0 00:20:59.681 is_local: 1 00:20:59.681 our_own: 0 00:20:59.681 wide_area: 0 00:20:59.681 multicast: 1 00:20:59.681 cached: 1 00:21:00.614 [2024-12-16 11:41:05.902275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.614 [2024-12-16 11:41:05.902356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b850 with addr=10.0.0.4, port=8009 00:21:00.614 [2024-12-16 11:41:05.902410] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:00.614 [2024-12-16 11:41:05.902431] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:00.614 [2024-12-16 11:41:05.902442] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:00.614 [2024-12-16 11:41:06.008362] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:00.614 [2024-12-16 11:41:06.008395] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:00.614 [2024-12-16 11:41:06.008434] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:00.872 [2024-12-16 11:41:06.094459] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:21:00.872 [2024-12-16 11:41:06.148918] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:00.872 [2024-12-16 11:41:06.149757] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1770a10:1 started. 00:21:00.872 [2024-12-16 11:41:06.151699] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:00.872 [2024-12-16 11:41:06.151727] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:00.872 [2024-12-16 11:41:06.156536] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1770a10 was disconnected and freed. delete nvme_qpair. 00:21:01.808 [2024-12-16 11:41:06.902204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.808 [2024-12-16 11:41:06.902275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1770810 with addr=10.0.0.4, port=8009 00:21:01.808 [2024-12-16 11:41:06.902302] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:01.808 [2024-12-16 11:41:06.902313] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:01.808 [2024-12-16 11:41:06.902323] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:02.744 [2024-12-16 11:41:07.902205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.744 [2024-12-16 11:41:07.902294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1759560 with addr=10.0.0.4, port=8009 00:21:02.744 [2024-12-16 11:41:07.902321] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:02.744 [2024-12-16 11:41:07.902332] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:02.744 [2024-12-16 11:41:07.902342] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:03.679 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:03.679 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:03.679 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:03.679 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.680 [2024-12-16 11:41:08.758290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:21:03.680 [2024-12-16 11:41:08.761769] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:03.680 [2024-12-16 11:41:08.761824] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.680 [2024-12-16 11:41:08.766231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:21:03.680 [2024-12-16 11:41:08.766755] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:03.680 11:41:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.680 11:41:08 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:21:03.680 [2024-12-16 11:41:08.897864] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:03.680 [2024-12-16 11:41:08.897920] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:03.680 [2024-12-16 11:41:08.908634] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:03.680 [2024-12-16 11:41:08.908677] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:03.680 [2024-12-16 11:41:08.908694] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:03.680 [2024-12-16 11:41:08.983808] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:03.680 [2024-12-16 11:41:08.994776] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:21:03.680 [2024-12-16 11:41:09.049234] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:21:03.680 [2024-12-16 11:41:09.049890] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x176dce0:1 started. 00:21:03.680 [2024-12-16 11:41:09.051547] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:03.680 [2024-12-16 11:41:09.051573] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:03.680 [2024-12-16 11:41:09.057160] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x176dce0 was disconnected and freed. delete nvme_qpair. 
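check_mdns_request_exists — used above with check_type 'not found' and re-run below expecting 'found' once the 10.0.0.4 listener is published — works by scraping avahi-browse's machine-parseable output: with -p, each record is one semicolon-separated line, '+' for a browsed service and '=' for a resolved one that also carries host, address, port and TXT fields, and the helper scans those lines for the service name, IP and port. A roughly equivalent one-shot check, using the service name and address from this run:

  # '=' lines look like:
  # =;(null);IPv4;spdk1;_nvme-disc._tcp;local;<hostname>;10.0.0.4;8009;"nqn=..." "p=tcp"
  if avahi-browse -t -r _nvme-disc._tcp -p | grep spdk1 | grep 10.0.0.4 | grep -q 8009; then
      echo "mdns record for spdk1 at 10.0.0.4:8009 found"
  fi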
00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:04.617 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:04.617 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:04.617 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:04.617 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.617 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.617 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:04.617 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.617 11:41:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:04.617 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:04.875 11:41:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 [2024-12-16 11:41:10.185416] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1775b30:1 started. 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 [2024-12-16 11:41:10.188581] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1775b30 was disconnected and freed. delete nvme_qpair. 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.875 11:41:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:21:04.875 [2024-12-16 11:41:10.200734] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x176f700:1 started. 00:21:04.875 [2024-12-16 11:41:10.207309] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x176f700 was disconnected and freed. delete nvme_qpair. 
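The get_notification_count calls above track how many bdev notifications (one per attached namespace) arrived since the last check: notify_get_notifications -i <id> returns the notifications from that event id onward, jq '. | length' counts them, and notify_id advances by the count — from 0 to 2 here, and on to 4 after the next pair of namespaces is added. A condensed sketch of the same bookkeeping, assuming rpc.py and the /tmp/host.sock socket from this run (the trace uses the rpc_cmd wrapper around the same RPC):

  notify_id=0
  count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
  notify_id=$((notify_id + count))
  echo "new notifications: $count, next notify_id: $notify_id"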
00:21:05.134 [2024-12-16 11:41:10.303038] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:21:05.134 [2024-12-16 11:41:10.303103] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:21:05.134 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:05.134 cookie is 0
00:21:05.134 is_local: 1
00:21:05.134 our_own: 0
00:21:05.134 wide_area: 0
00:21:05.134 multicast: 1
00:21:05.134 cached: 1
00:21:05.134 [2024-12-16 11:41:10.303116] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:21:05.134 [2024-12-16 11:41:10.403015] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:21:05.134 [2024-12-16 11:41:10.403050] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:21:05.134 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:05.134 cookie is 0
00:21:05.134 is_local: 1
00:21:05.134 our_own: 0
00:21:05.134 wide_area: 0
00:21:05.134 multicast: 1
00:21:05.134 cached: 1
00:21:05.134 [2024-12-16 11:41:10.403063] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:06.134 [2024-12-16 11:41:11.324023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:21:06.134 [2024-12-16 11:41:11.324493] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:06.134 [2024-12-16 11:41:11.324533] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:06.134 [2024-12-16 11:41:11.324572] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:21:06.134 [2024-12-16 11:41:11.324587] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:06.134 [2024-12-16 11:41:11.331947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 ***
00:21:06.134 [2024-12-16 11:41:11.332482] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:06.134 [2024-12-16 11:41:11.332542] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.134 11:41:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1
00:21:06.134 [2024-12-16 11:41:11.463664] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0
00:21:06.134 [2024-12-16 11:41:11.464170] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0
00:21:06.134 [2024-12-16 11:41:11.522213] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:21:06.134 [2024-12-16 11:41:11.522287] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:21:06.134 [2024-12-16 11:41:11.522298] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:21:06.134 [2024-12-16 11:41:11.522304] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:21:06.134 [2024-12-16 11:41:11.522322] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:06.134 [2024-12-16 11:41:11.522527] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421
00:21:06.134 [2024-12-16 11:41:11.522561] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:21:06.134 [2024-12-16 11:41:11.522569] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:21:06.134 [2024-12-16 11:41:11.522574] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:21:06.135 [2024-12-16 11:41:11.522589] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:06.393 [2024-12-16 11:41:11.567777] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:21:06.393 [2024-12-16 11:41:11.567808] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:21:06.393 [2024-12-16 11:41:11.567866] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:21:06.393 [2024-12-16 11:41:11.567880] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:21:06.959 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.218 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]]
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.479 [2024-12-16 11:41:12.649344] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:07.479 [2024-12-16 11:41:12.649383] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:07.479 [2024-12-16 11:41:12.649420] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:21:07.479 [2024-12-16 11:41:12.649436] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:07.479 [2024-12-16 11:41:12.655386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.655430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.655444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.655454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.655464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.655473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.655483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.655492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.655502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set
00:21:07.479 [2024-12-16 11:41:12.657329] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:21:07.479 [2024-12-16 11:41:12.657385] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:21:07.479 [2024-12-16 11:41:12.659866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.659898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.659910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.659919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.659929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.659938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.659949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.479 [2024-12-16 11:41:12.659958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.479 [2024-12-16 11:41:12.659966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.479 11:41:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1
00:21:07.479 [2024-12-16 11:41:12.665344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor
00:21:07.479 [2024-12-16 11:41:12.669837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor
00:21:07.479 [2024-12-16 11:41:12.675360] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:07.479 [2024-12-16 11:41:12.675386] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:07.479 [2024-12-16 11:41:12.675393] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:07.479 [2024-12-16 11:41:12.675399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:07.479 [2024-12-16 11:41:12.675432] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
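A note on the helpers traced above: get_bdev_list and get_subsystem_paths are one-line JSON-RPC pipelines. Two sockets are in play: rpc_cmd -s /tmp/host.sock queries the host-side (initiator) SPDK app, while the bare rpc_cmd nvmf_subsystem_add_listener / nvmf_subsystem_remove_listener calls go to the target app's default RPC socket. A minimal standalone reconstruction of the two query helpers, assuming rpc_cmd wraps SPDK's scripts/rpc.py the way the autotest harness does and that jq is on PATH (a sketch, not the test's actual source, which appears to live under test/nvmf/host/ in the SPDK tree):

    #!/usr/bin/env bash
    # Sketch of the helpers seen in the xtrace above; paths and names assumed.
    HOST_SOCK=/tmp/host.sock
    rpc_cmd() { scripts/rpc.py -s "$HOST_SOCK" "$@"; }

    # Sorted, space-separated bdev names, e.g.
    # "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2"
    get_bdev_list() {
        rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Sorted trsvcid (port) of every path of one controller, e.g. "4420 4421"
    get_subsystem_paths() {
        rpc_cmd bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The [[ mdns0_nvme0n1 ... == \m\d\n\s... ]] lines are bash xtrace rendering of the comparison between that pipeline's output and the expected literal; xtrace backslash-escapes every character of the quoted right-hand side.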
00:21:07.479 [2024-12-16 11:41:12.675523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.479 [2024-12-16 11:41:12.675545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.479 [2024-12-16 11:41:12.675556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.479 [2024-12-16 11:41:12.675573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.479 [2024-12-16 11:41:12.675588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.479 [2024-12-16 11:41:12.675597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.479 [2024-12-16 11:41:12.675608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.479 [2024-12-16 11:41:12.675616] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.479 [2024-12-16 11:41:12.675622] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.479 [2024-12-16 11:41:12.675628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.480 [2024-12-16 11:41:12.679843] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.480 [2024-12-16 11:41:12.679868] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.480 [2024-12-16 11:41:12.679875] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.679880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.480 [2024-12-16 11:41:12.679907] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.679963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.480 [2024-12-16 11:41:12.679983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.480 [2024-12-16 11:41:12.679994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.480 [2024-12-16 11:41:12.680038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.480 [2024-12-16 11:41:12.680054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.480 [2024-12-16 11:41:12.680062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.480 [2024-12-16 11:41:12.680071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.480 [2024-12-16 11:41:12.680079] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:07.480 [2024-12-16 11:41:12.680084] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.480 [2024-12-16 11:41:12.680089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.480 [2024-12-16 11:41:12.685441] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.480 [2024-12-16 11:41:12.685463] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.480 [2024-12-16 11:41:12.685485] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.685490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.480 [2024-12-16 11:41:12.685531] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.685582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.480 [2024-12-16 11:41:12.685601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.480 [2024-12-16 11:41:12.685611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.480 [2024-12-16 11:41:12.685627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.480 [2024-12-16 11:41:12.685640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.480 [2024-12-16 11:41:12.685648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.480 [2024-12-16 11:41:12.685657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.480 [2024-12-16 11:41:12.685664] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.480 [2024-12-16 11:41:12.685670] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.480 [2024-12-16 11:41:12.685674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.480 [2024-12-16 11:41:12.689918] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.480 [2024-12-16 11:41:12.689940] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.480 [2024-12-16 11:41:12.689962] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.689967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.480 [2024-12-16 11:41:12.690007] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
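The repeated connect() failures in this stretch are the point of the step, not a fault: mdns_discovery.sh@195/@196 above just removed the port 4420 listeners from both subsystems, so the host's bdev_nvme reconnect poller is now probing ports nobody listens on. On Linux, errno = 111 is ECONNREFUSED; a quick way to confirm on the build host (a sketch, assuming the usual kernel uapi headers are installed):

    grep ECONNREFUSED /usr/include/asm-generic/errno.h
    # expected: #define ECONNREFUSED    111     /* Connection refused */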
00:21:07.480 [2024-12-16 11:41:12.690068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.480 [2024-12-16 11:41:12.690089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.480 [2024-12-16 11:41:12.690099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.480 [2024-12-16 11:41:12.690114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.480 [2024-12-16 11:41:12.690127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.480 [2024-12-16 11:41:12.690135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.480 [2024-12-16 11:41:12.690144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.480 [2024-12-16 11:41:12.690152] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:07.480 [2024-12-16 11:41:12.690157] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.480 [2024-12-16 11:41:12.690162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.480 [2024-12-16 11:41:12.695550] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.480 [2024-12-16 11:41:12.695574] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.480 [2024-12-16 11:41:12.695595] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.695600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.480 [2024-12-16 11:41:12.695641] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.695690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.480 [2024-12-16 11:41:12.695708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.480 [2024-12-16 11:41:12.695718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.480 [2024-12-16 11:41:12.695733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.480 [2024-12-16 11:41:12.695746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.480 [2024-12-16 11:41:12.695755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.480 [2024-12-16 11:41:12.695764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.480 [2024-12-16 11:41:12.695771] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:07.480 [2024-12-16 11:41:12.695776] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.480 [2024-12-16 11:41:12.695781] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.480 [2024-12-16 11:41:12.700001] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.480 [2024-12-16 11:41:12.700065] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.480 [2024-12-16 11:41:12.700072] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.700077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.480 [2024-12-16 11:41:12.700101] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:07.480 [2024-12-16 11:41:12.700166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.480 [2024-12-16 11:41:12.700188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.480 [2024-12-16 11:41:12.700197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.480 [2024-12-16 11:41:12.700213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.700227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.700235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.700244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.700251] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:07.481 [2024-12-16 11:41:12.700257] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.700262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.705634] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.705661] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.705683] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.705688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.481 [2024-12-16 11:41:12.705732] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:07.481 [2024-12-16 11:41:12.705816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.481 [2024-12-16 11:41:12.705836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.481 [2024-12-16 11:41:12.705847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.481 [2024-12-16 11:41:12.705862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.705876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.705884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.705893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.705901] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.481 [2024-12-16 11:41:12.705906] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.705911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.710112] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.710138] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.710144] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.710149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.481 [2024-12-16 11:41:12.710191] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.710245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.481 [2024-12-16 11:41:12.710264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.481 [2024-12-16 11:41:12.710274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.481 [2024-12-16 11:41:12.710289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.710302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.710310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.710319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.710343] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:07.481 [2024-12-16 11:41:12.710348] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.710353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.715726] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.715751] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.715773] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.715778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.481 [2024-12-16 11:41:12.715819] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.715870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.481 [2024-12-16 11:41:12.715889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.481 [2024-12-16 11:41:12.715899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.481 [2024-12-16 11:41:12.715915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.715928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.715936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.715945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.715953] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.481 [2024-12-16 11:41:12.715958] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.715963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.720184] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.720208] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.720214] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.720220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.481 [2024-12-16 11:41:12.720246] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:07.481 [2024-12-16 11:41:12.720296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.481 [2024-12-16 11:41:12.720315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.481 [2024-12-16 11:41:12.720324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.481 [2024-12-16 11:41:12.720339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.720367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.720375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.720383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.720391] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:07.481 [2024-12-16 11:41:12.720413] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.720418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.725814] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.725837] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.725858] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.725863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.481 [2024-12-16 11:41:12.725888] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.481 [2024-12-16 11:41:12.725952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.481 [2024-12-16 11:41:12.725972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.481 [2024-12-16 11:41:12.725982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.481 [2024-12-16 11:41:12.725996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.481 [2024-12-16 11:41:12.726010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.481 [2024-12-16 11:41:12.726018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.481 [2024-12-16 11:41:12.726040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.481 [2024-12-16 11:41:12.726049] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:07.481 [2024-12-16 11:41:12.726054] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.481 [2024-12-16 11:41:12.726059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.481 [2024-12-16 11:41:12.730254] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.481 [2024-12-16 11:41:12.730276] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.481 [2024-12-16 11:41:12.730282] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.730287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.730327] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.730375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.730393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.482 [2024-12-16 11:41:12.730403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.730417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.730430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.730438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.730446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.730454] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:07.482 [2024-12-16 11:41:12.730459] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.482 [2024-12-16 11:41:12.730480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.482 [2024-12-16 11:41:12.735900] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.482 [2024-12-16 11:41:12.735926] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.482 [2024-12-16 11:41:12.735932] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.735937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.735961] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:07.482 [2024-12-16 11:41:12.736020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.736041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.482 [2024-12-16 11:41:12.736051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.736066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.736080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.736088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.736097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.736105] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.482 [2024-12-16 11:41:12.736110] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.482 [2024-12-16 11:41:12.736115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.482 [2024-12-16 11:41:12.740321] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.482 [2024-12-16 11:41:12.740344] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.482 [2024-12-16 11:41:12.740351] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.740356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.740382] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.740438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.740457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.482 [2024-12-16 11:41:12.740466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.740481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.740495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.740503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.740512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.740519] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
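Each failed cycle prints the same ladder: refused TCP connect, flush of the dead qpair (Bad file descriptor), controller left in error state, reconnect poll giving up, reset marked failed, then a fresh disconnect/reconnect. The attempts alternate between tqpair 0x16e5790 (nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420) and tqpair 0x175adc0 (nqn.2016-06.io.spdk:cnode20 at 10.0.0.4:4420), a few milliseconds apart, for as long as the script's sleep 1 runs. To tally attempts per target from a saved copy of this console output (the file name here is hypothetical):

    grep -o 'sock connection error of tqpair=0x[0-9a-f]*' console.log | sort | uniq -c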
00:21:07.482 [2024-12-16 11:41:12.740525] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.482 [2024-12-16 11:41:12.740530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.482 [2024-12-16 11:41:12.745973] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.482 [2024-12-16 11:41:12.745998] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.482 [2024-12-16 11:41:12.746020] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.746036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.746077] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.746128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.746147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.482 [2024-12-16 11:41:12.746157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.746172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.746185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.746193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.746202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.746210] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.482 [2024-12-16 11:41:12.746215] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.482 [2024-12-16 11:41:12.746219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:07.482 [2024-12-16 11:41:12.750393] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:07.482 [2024-12-16 11:41:12.750433] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:07.482 [2024-12-16 11:41:12.750456] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.750461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.750488] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:07.482 [2024-12-16 11:41:12.750541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.750561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420 00:21:07.482 [2024-12-16 11:41:12.750571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.750587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.750617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.750627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.750636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.750644] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:07.482 [2024-12-16 11:41:12.750649] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:07.482 [2024-12-16 11:41:12.750654] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:07.482 [2024-12-16 11:41:12.756069] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:07.482 [2024-12-16 11:41:12.756133] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:07.482 [2024-12-16 11:41:12.756139] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.756144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.482 [2024-12-16 11:41:12.756184] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.482 [2024-12-16 11:41:12.756250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.482 [2024-12-16 11:41:12.756270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420 00:21:07.482 [2024-12-16 11:41:12.756280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set 00:21:07.482 [2024-12-16 11:41:12.756295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor 00:21:07.482 [2024-12-16 11:41:12.756308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.482 [2024-12-16 11:41:12.756317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.482 [2024-12-16 11:41:12.756326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.482 [2024-12-16 11:41:12.756334] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:07.482 [2024-12-16 11:41:12.756339] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:07.482 [2024-12-16 11:41:12.756344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:07.482 [2024-12-16 11:41:12.760497] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:21:07.482 [2024-12-16 11:41:12.760518] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.760541] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.760546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.760572] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.760622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.760641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420
00:21:07.483 [2024-12-16 11:41:12.760651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.760666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.760695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.760704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.760713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.760721] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.760726] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.760731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.766181] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:07.483 [2024-12-16 11:41:12.766205] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.766212] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.766217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.766243] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.766294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.766314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420
00:21:07.483 [2024-12-16 11:41:12.766323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.766339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.766352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.766360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.766369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.766377] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.766383] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.766387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.770580] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:21:07.483 [2024-12-16 11:41:12.770601] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.770623] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.770628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.770669] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.770720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.770739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420
00:21:07.483 [2024-12-16 11:41:12.770750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.770782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.770807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.770817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.770826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.770834] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.770839] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.770844] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.776269] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:07.483 [2024-12-16 11:41:12.776309] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.776315] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.776320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.776360] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.776408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.776427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420
00:21:07.483 [2024-12-16 11:41:12.776436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.776451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.776465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.776473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.776482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.776490] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.776495] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.776500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.780679] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:21:07.483 [2024-12-16 11:41:12.780702] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.780708] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.780713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.780738] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.780803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.780823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175adc0 with addr=10.0.0.4, port=4420
00:21:07.483 [2024-12-16 11:41:12.780833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175adc0 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.780848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175adc0 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.780869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.780877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.780886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.780893] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.780899] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.780903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.786354] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:07.483 [2024-12-16 11:41:12.786377] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:07.483 [2024-12-16 11:41:12.786399] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.786404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:07.483 [2024-12-16 11:41:12.786428] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:07.483 [2024-12-16 11:41:12.786491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.483 [2024-12-16 11:41:12.786510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5790 with addr=10.0.0.3, port=4420
00:21:07.483 [2024-12-16 11:41:12.786520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5790 is same with the state(6) to be set
00:21:07.483 [2024-12-16 11:41:12.786535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e5790 (9): Bad file descriptor
00:21:07.483 [2024-12-16 11:41:12.786548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:07.483 [2024-12-16 11:41:12.786556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:07.483 [2024-12-16 11:41:12.786565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:07.483 [2024-12-16 11:41:12.786573] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:07.483 [2024-12-16 11:41:12.786579] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:07.483 [2024-12-16 11:41:12.786583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:07.483 [2024-12-16 11:41:12.788741] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:21:07.484 [2024-12-16 11:41:12.788773] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:21:07.484 [2024-12-16 11:41:12.788797] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:07.484 [2024-12-16 11:41:12.788831] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found
00:21:07.484 [2024-12-16 11:41:12.788847] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:21:07.484 [2024-12-16 11:41:12.788861] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:07.484 [2024-12-16 11:41:12.874855] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:21:07.484 [2024-12-16 11:41:12.874938] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
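Note on the reconnect loop above: errno = 111 is ECONNREFUSED. While the test moves each subsystem's listener from port 4420 to 4421, nothing is listening on the old port, so every bdev_nvme reconnect attempt is refused until discovery re-attaches the controllers on 4421. A minimal bash sketch of the same reachability check, runnable from the host (probe_port is a hypothetical helper, not part of the test suite):

    probe_port() {
        # bash's /dev/tcp pseudo-device attempts a TCP connect;
        # a refused connection fails just like the connect() calls above
        local ip=$1 port=$2
        if (exec 3<>"/dev/tcp/$ip/$port") 2>/dev/null; then
            echo "$ip:$port is accepting connections"
        else
            echo "$ip:$port refused (errno 111 / ECONNREFUSED expected)"
        fi
    }
    probe_port 10.0.0.4 4420   # refused while the 4420 listener is removed
    probe_port 10.0.0.4 4421   # succeeds once the subsystem listens on 4421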
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:08.418 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.676 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:21:08.676 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.677 11:41:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
00:21:08.677 [2024-12-16 11:41:14.003064] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:21:09.611 11:41:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:21:09.611 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:09.869 [2024-12-16 11:41:15.176327] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:21:09.869 2024/12/16 11:41:15 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:21:09.869 request:
00:21:09.869 {
00:21:09.869 "method": "bdev_nvme_start_mdns_discovery",
00:21:09.869 "params": {
00:21:09.869 "name": "mdns",
00:21:09.869 "svcname": "_nvme-disc._http",
00:21:09.869 "hostnqn": "nqn.2021-12.io.spdk:test"
00:21:09.869 }
00:21:09.869 }
00:21:09.869 Got JSON-RPC error response
00:21:09.869 GoRPCClient: error on JSON-RPC call
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:09.869 11:41:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
00:21:10.435 [2024-12-16 11:41:15.765029] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:21:10.693 [2024-12-16 11:41:15.865028] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:21:10.693 [2024-12-16 11:41:15.965065] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:21:10.693 [2024-12-16 11:41:15.965345] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:21:10.693 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:10.693 cookie is 0
00:21:10.693 is_local: 1
00:21:10.693 our_own: 0
00:21:10.693 wide_area: 0
00:21:10.693 multicast: 1
00:21:10.693 cached: 1
00:21:10.693 [2024-12-16 11:41:16.065045] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:21:10.693 [2024-12-16 11:41:16.065269] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:21:10.693 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:10.693 cookie is 0
00:21:10.693 is_local: 1
00:21:10.693 our_own: 0
00:21:10.693 wide_area: 0
00:21:10.693 multicast: 1
00:21:10.693 cached: 1
00:21:10.693 [2024-12-16 11:41:16.065412] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
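The NOT wrapper traced above (from autotest_common.sh) inverts the exit status: the test passes precisely because this second bdev_nvme_start_mdns_discovery call fails, since a discovery service named mdns already exists (JSON-RPC error Code=-17, File exists). A standalone sketch of the same expected-failure check, assuming scripts/rpc.py from an SPDK checkout and the same /tmp/host.sock RPC socket used by this run:

    # Starting a second mDNS discovery under an existing -b name must fail.
    if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
            -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test; then
        echo "ERROR: duplicate mdns discovery unexpectedly succeeded" >&2
        exit 1
    else
        echo "OK: duplicate rejected with 'File exists' as expected"
    fi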
00:21:10.952 [2024-12-16 11:41:16.165058] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:21:10.952 [2024-12-16 11:41:16.165281] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:21:10.952 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:10.952 cookie is 0
00:21:10.952 is_local: 1
00:21:10.952 our_own: 0
00:21:10.952 wide_area: 0
00:21:10.952 multicast: 1
00:21:10.952 cached: 1
00:21:10.952 [2024-12-16 11:41:16.265048] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:21:10.952 [2024-12-16 11:41:16.265338] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:21:10.952 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:10.952 cookie is 0
00:21:10.952 is_local: 1
00:21:10.952 our_own: 0
00:21:10.952 wide_area: 0
00:21:10.952 multicast: 1
00:21:10.952 cached: 1
00:21:10.952 [2024-12-16 11:41:16.265481] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:21:11.887 [2024-12-16 11:41:16.970332] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:21:11.887 [2024-12-16 11:41:16.970540] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:21:11.887 [2024-12-16 11:41:16.970603] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:11.887 [2024-12-16 11:41:17.056433] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:21:11.887 [2024-12-16 11:41:17.115066] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:21:11.887 [2024-12-16 11:41:17.115884] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x177a6e0:1 started.
00:21:11.887 [2024-12-16 11:41:17.117773] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:21:11.887 [2024-12-16 11:41:17.117940] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:21:11.887 [2024-12-16 11:41:17.119439] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x177a6e0 was disconnected and freed. delete nvme_qpair.
00:21:11.887 [2024-12-16 11:41:17.169894] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:21:11.887 [2024-12-16 11:41:17.169915] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:21:11.887 [2024-12-16 11:41:17.169932] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:21:11.887 [2024-12-16 11:41:17.256026] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:21:12.145 [2024-12-16 11:41:17.314573] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:21:12.145 [2024-12-16 11:41:17.315207] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x18a7860:1 started.
00:21:12.145 [2024-12-16 11:41:17.316786] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:21:12.145 [2024-12-16 11:41:17.316809] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:21:12.145 [2024-12-16 11:41:17.318979] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x18a7860 was disconnected and freed. delete nvme_qpair.
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:21:15.463 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.464 [2024-12-16 11:41:20.365914] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:21:15.464 2024/12/16 11:41:20 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:21:15.464 request:
00:21:15.464 {
00:21:15.464 "method": "bdev_nvme_start_mdns_discovery",
00:21:15.464 "params": {
00:21:15.464 "name": "cdc",
00:21:15.464 "svcname": "_nvme-disc._tcp",
00:21:15.464 "hostnqn": "nqn.2021-12.io.spdk:test"
00:21:15.464 }
00:21:15.464 }
00:21:15.464 Got JSON-RPC error response
00:21:15.464 GoRPCClient: error on JSON-RPC call
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:15.464 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:15.464 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:15.464 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:15.464 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:15.464 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:15.464 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:15.464 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.464 11:41:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1
00:21:15.464 [2024-12-16 11:41:20.565138] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found'
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:16.399 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:16.399 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:16.399 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.399 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 96839
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 96839
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 96857
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:16.400 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync
00:21:16.400 Got SIGTERM, quitting.
00:21:16.400 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4.
00:21:16.400 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3.
00:21:16.400 avahi-daemon 0.8 exiting.
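check_mdns_request_exists above drives avahi-browse in parsable mode (-p) and pattern-matches its ';'-separated records; the '=' records carry hostname, address, port, and TXT fields, which is why matching the service name and IP against each line is sufficient. A condensed sketch of the same check (field layout taken from the '=' records in this log, not from avahi documentation):

    # Is 'spdk1' still advertised for 10.0.0.3 over _nvme-disc._tcp?
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    if echo "$output" | grep '^=' | grep spdk1 | grep -q '10\.0\.0\.3'; then
        echo "spdk1 still advertised on 10.0.0.3"        # the 'found' case
    else
        echo "spdk1 no longer advertised on 10.0.0.3"    # the 'not found' case checked above
    fi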
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:16.658 rmmod nvme_tcp
00:21:16.658 rmmod nvme_fabrics
00:21:16.658 rmmod nvme_keyring
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 96804 ']'
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 96804
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 96804 ']'
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 96804
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96804
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:16.658 killing process with pid 96804
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96804'
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 96804
00:21:16.658 11:41:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 96804
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:16.916 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:17.175 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0
00:21:17.175
00:21:17.175 real 0m21.805s
00:21:17.175 user 0m42.595s
00:21:17.175 sys 0m2.142s
00:21:17.175 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:17.175 11:41:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.175 ************************************
00:21:17.175 END TEST nvmf_mdns_discovery
00:21:17.175 ************************************
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]]
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.176 ************************************
00:21:17.176 START TEST nvmf_host_multipath
00:21:17.176 ************************************
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:21:17.176 * Looking for test storage...
00:21:17.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:21:17.176 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:17.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.436 --rc genhtml_branch_coverage=1
00:21:17.436 --rc genhtml_function_coverage=1
00:21:17.436 --rc genhtml_legend=1
00:21:17.436 --rc geninfo_all_blocks=1
00:21:17.436 --rc geninfo_unexecuted_blocks=1
00:21:17.436
00:21:17.436 '
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:17.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.436 --rc genhtml_branch_coverage=1
00:21:17.436 --rc genhtml_function_coverage=1
00:21:17.436 --rc genhtml_legend=1
00:21:17.436 --rc geninfo_all_blocks=1
00:21:17.436 --rc geninfo_unexecuted_blocks=1
00:21:17.436
00:21:17.436 '
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:21:17.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.436 --rc genhtml_branch_coverage=1
00:21:17.436 --rc genhtml_function_coverage=1
00:21:17.436 --rc genhtml_legend=1
00:21:17.436 --rc geninfo_all_blocks=1
00:21:17.436 --rc geninfo_unexecuted_blocks=1
00:21:17.436
00:21:17.436 '
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:21:17.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:17.436 --rc genhtml_branch_coverage=1
00:21:17.436 --rc genhtml_function_coverage=1
00:21:17.436 --rc genhtml_legend=1
00:21:17.436 --rc geninfo_all_blocks=1
00:21:17.436 --rc geninfo_unexecuted_blocks=1
00:21:17.436
00:21:17.436 '
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:17.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:17.436 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:17.437 Cannot find device "nvmf_init_br" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:17.437 Cannot find device "nvmf_init_br2" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:17.437 Cannot find device "nvmf_tgt_br" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.437 Cannot find device "nvmf_tgt_br2" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:17.437 Cannot find device "nvmf_init_br" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:17.437 Cannot find device "nvmf_init_br2" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:17.437 Cannot find device "nvmf_tgt_br" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:17.437 Cannot find device "nvmf_tgt_br2" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:17.437 Cannot find device "nvmf_br" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:17.437 Cannot find device "nvmf_init_if" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:17.437 Cannot find device "nvmf_init_if2" 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:17.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.437 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
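Up to this point nvmf_veth_init has built SPDK's virtual test topology: a network namespace for the target, four veth pairs, and the 10.0.0.0/24 addressing used by every later ping and connect. A condensed sketch of that sequence, using only the ip(8) commands and interface names visible in this trace (the bridge enslaving and iptables rules follow next in the log):

    # namespace for the target; the host side keeps the initiator interfaces
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring every end up, then tie the *_br peers together with a bridge
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done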
00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.697 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:17.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:17.697 00:21:17.697 --- 10.0.0.3 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:17.697 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:17.697 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:21:17.697 00:21:17.697 --- 10.0.0.4 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:21:17.697 00:21:17.697 --- 10.0.0.1 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:17.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:17.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:21:17.697 00:21:17.697 --- 10.0.0.2 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=97504 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 97504 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97504 ']' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.697 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:17.956 [2024-12-16 11:41:23.132759] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:21:17.956 [2024-12-16 11:41:23.132861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.956 [2024-12-16 11:41:23.284295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:17.956 [2024-12-16 11:41:23.346685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.956 [2024-12-16 11:41:23.346752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.956 [2024-12-16 11:41:23.346766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.956 [2024-12-16 11:41:23.346776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.956 [2024-12-16 11:41:23.346786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.956 [2024-12-16 11:41:23.349288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.956 [2024-12-16 11:41:23.349308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=97504 00:21:18.215 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:18.474 [2024-12-16 11:41:23.818054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.474 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:18.733 Malloc0 00:21:18.733 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:19.300 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.562 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:19.828 [2024-12-16 11:41:25.019754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.828 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:21:20.100 [2024-12-16 11:41:25.299862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=97593 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 97593 /var/tmp/bdevperf.sock 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97593 ']' 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.100 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:20.359 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.359 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:20.359 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:20.617 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:21.186 Nvme0n1 00:21:21.186 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:21.753 Nvme0n1 00:21:21.753 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:21.753 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.687 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:22.687 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:22.945 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
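The RPC sequence above is the whole multipath fixture: one Malloc-backed namespace behind subsystem nqn.2016-06.io.spdk:cnode1, exported on TCP ports 4420 and 4421, with bdevperf attaching the same Nvme0 controller over both paths. Condensed, with every flag exactly as it appears in this trace (rpc_py and the bdevperf socket as defined at host/multipath.sh@14 and @17):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: transport, backing bdev, subsystem, namespace, two listeners
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # host side: bdevperf in RPC-wait mode, then one controller over each port
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Both attach calls report the same Nvme0n1 bdev in the trace: with -x multipath the second call adds a path to the existing controller instead of failing, and bdevperf.py perform_tests then drives I/O over whichever path ANA makes active.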
00:21:23.203 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:23.203 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97668 00:21:23.203 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:23.203 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.767 Attaching 4 probes... 00:21:29.767 @path[10.0.0.3, 4421]: 16886 00:21:29.767 @path[10.0.0.3, 4421]: 17347 00:21:29.767 @path[10.0.0.3, 4421]: 17477 00:21:29.767 @path[10.0.0.3, 4421]: 17023 00:21:29.767 @path[10.0.0.3, 4421]: 17463 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97668 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:29.767 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:29.767 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:30.025 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:30.025 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97809 00:21:30.025 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:30.025 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:36.589 Attaching 4 probes... 00:21:36.589 @path[10.0.0.3, 4420]: 17245 00:21:36.589 @path[10.0.0.3, 4420]: 16865 00:21:36.589 @path[10.0.0.3, 4420]: 17579 00:21:36.589 @path[10.0.0.3, 4420]: 17765 00:21:36.589 @path[10.0.0.3, 4420]: 16972 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97809 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:36.589 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:36.848 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:37.106 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:37.106 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97940 00:21:37.106 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:37.106 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.670 Attaching 4 probes... 
00:21:43.670 @path[10.0.0.3, 4421]: 14162 00:21:43.670 @path[10.0.0.3, 4421]: 18382 00:21:43.670 @path[10.0.0.3, 4421]: 18047 00:21:43.670 @path[10.0.0.3, 4421]: 18198 00:21:43.670 @path[10.0.0.3, 4421]: 17930 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97940 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:43.670 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:43.929 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:44.187 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:44.187 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:44.187 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98077 00:21:44.187 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.746 Attaching 4 probes... 
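Each confirm_io_on_port cycle above is the same measurement: flip the ANA states, let the nvmf_path.bt bpftrace script count I/O per path for six seconds, then verify that the port advertising the expected state is also the one that carried traffic. A sketch of the two checks from host/multipath.sh@67-71, with expected_state and expected_port standing in for the function's arguments (hypothetical names; the script itself uses positional parameters):

    # 1) which listener currently advertises the expected ANA state?
    active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="'"$expected_state"'") | .address.trsvcid')

    # 2) which port did the probes actually see I/O on?
    #    trace.txt lines look like:  @path[10.0.0.3, 4421]: 17347
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

    [[ $active_port == "$expected_port" ]]
    [[ $port == "$expected_port" ]]

In the cycle that follows (both listeners inaccessible, confirm_io_on_port '' ''), trace.txt contains no @path lines at all, so both checks reduce to comparing empty strings.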
00:21:50.746 00:21:50.746 00:21:50.746 00:21:50.746 00:21:50.746 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98077 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:50.746 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:50.747 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:51.005 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:51.005 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98208 00:21:51.005 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:51.005 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.576 Attaching 4 probes... 
00:21:57.576 @path[10.0.0.3, 4421]: 17032 00:21:57.576 @path[10.0.0.3, 4421]: 16804 00:21:57.576 @path[10.0.0.3, 4421]: 16852 00:21:57.576 @path[10.0.0.3, 4421]: 17020 00:21:57.576 @path[10.0.0.3, 4421]: 17122 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98208 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.576 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:57.576 [2024-12-16 11:42:02.919769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.576 [2024-12-16 11:42:02.919954] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.577 (the same tcp.c:1790 *ERROR* line repeats verbatim, with successive 11:42:02.92 timestamps, for the rest of this excerpt)
with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9eb0 is same with the state(6) to be set 00:21:57.578 [2024-12-16 11:42:02.920900] 
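Runs like the one summarized above are easier to triage once duplicates are collapsed. A minimal bash sketch, assuming the console output was saved to a file (build.log is a placeholder name, not produced by this job):

  # Strip the leading Jenkins/shell stamps and the bracketed SPDK timestamps,
  # then count identical messages, most frequent first.
  sed -E 's/^([0-9:.]+ )+//; s/\[[0-9 :.-]+\] //' build.log | sort | uniq -c | sort -rn | head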
00:21:57.578 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:21:58.974 11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98345
11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:22:05.531 11:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:22:05.531 11:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
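The trace lines above are the whole of confirm_io_on_port: bpftrace.sh attaches probes (the "Attaching 4 probes..." line below) and counts I/O per path, while rpc.py asks the target which listener currently advertises the expected ANA state. Collected into one runnable sketch under stated assumptions (variable names are illustrative; the helper scripts and NQN are the ones used in this run):

  #!/usr/bin/env bash
  set -euo pipefail
  rootdir=/home/vagrant/spdk_repo/spdk
  traced_pid=${traced_pid:?pid passed to bpftrace.sh (97504 in this run)}
  expected_state=$1    # e.g. non_optimized
  expected_port=$2     # e.g. 4420

  # Sample the I/O path for ~6 s; nvmf_path.bt prints "@path[addr, port]: count" lines.
  "$rootdir/scripts/bpftrace.sh" "$traced_pid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
  dtrace_pid=$!
  sleep 6

  # Port of the listener that currently reports the expected ANA state.
  active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r --arg s "$expected_state" '.[] | select(.ana_states[0].ana_state==$s) | .address.trsvcid')

  # Port the traced I/O actually used (first @path sample in the trace).
  port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.3," {print $2}' | sed -n 1p)

  kill "$dtrace_pid"
  rm -f trace.txt
  # Both views must agree, mirroring the two [[ ... ]] checks at multipath.sh lines 70-71.
  [[ $port == "$expected_port" && $active_port == "$expected_port" ]]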
00:22:05.531 @path[10.0.0.3, 4420]: 17348
00:22:05.531 @path[10.0.0.3, 4420]: 17884
00:22:05.531 @path[10.0.0.3, 4420]: 17833
00:22:05.531 @path[10.0.0.3, 4420]: 18125
00:22:05.531 @path[10.0.0.3, 4420]: 18244
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98345
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:22:05.531 [2024-12-16 11:42:10.520985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:22:05.531 11:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
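The @107/@108 RPCs just traced are the actual path flip: the subsystem gets a second TCP listener on 10.0.0.3:4421, and that listener is marked ANA-optimized so host multipath should steer I/O off the 4420 path. Stated standalone (the two rpc.py commands are verbatim from the trace; rootdir is shorthand for the repo path):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Expose the subsystem on a second TCP portal...
  "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # ...and advertise that portal as the optimized ANA path.
  "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized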
00:22:12.090 11:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
11:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98532
11:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
11:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:22:18.674 11:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:22:18.674 11:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:18.674 Attaching 4 probes...
00:22:18.674 @path[10.0.0.3, 4421]: 17455
00:22:18.674 @path[10.0.0.3, 4421]: 17849
00:22:18.674 @path[10.0.0.3, 4421]: 17745
00:22:18.674 @path[10.0.0.3, 4421]: 17764
00:22:18.674 @path[10.0.0.3, 4421]: 17771
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98532
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 97593
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97593 ']'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97593
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97593
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2
killing process with pid 97593
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97593'
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97593
00:22:18.674 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97593
00:22:18.674 {
00:22:18.674 "results": [
00:22:18.674 {
00:22:18.674 "job": "Nvme0n1",
00:22:18.674 "core_mask": "0x4",
00:22:18.674 "workload": "verify",
00:22:18.674 "status": "terminated",
00:22:18.674 "verify_range": {
00:22:18.674 "start": 0,
00:22:18.674 "length": 16384
00:22:18.674 },
00:22:18.675 "queue_depth": 128,
00:22:18.675 "io_size": 4096,
00:22:18.675 "runtime": 56.156984,
00:22:18.675 "iops": 7515.859469945893,
00:22:18.675 "mibps": 29.358826054476143,
00:22:18.675 "io_failed": 0,
00:22:18.675 "io_timeout": 0,
00:22:18.675 "avg_latency_us": 16997.188937930725,
00:22:18.675 "min_latency_us": 205.73090909090908,
00:22:18.675 "max_latency_us": 7046430.72
00:22:18.675 }
00:22:18.675 ],
00:22:18.675 "core_count": 1
00:22:18.675 }
00:22:18.675 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 97593
11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
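The JSON block above is bdevperf's final per-job result dump. Pulling the headline numbers out with jq looks like this (results.json is a placeholder for the captured block; the field names are the ones visible above):

  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us|floor) us"' results.json
  # -> Nvme0n1: 7515 IOPS, 29.358826054476143 MiB/s, avg 16997 us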
00:22:18.675 [2024-12-16 11:41:25.368402] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:22:18.675 [2024-12-16 11:41:25.368506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97593 ]
00:22:18.675 [2024-12-16 11:41:25.515312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:18.675 [2024-12-16 11:41:25.585593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:18.675 Running I/O for 90 seconds...
00:22:18.675 9008.00 IOPS, 35.19 MiB/s [2024-12-16T11:42:24.089Z]
9024.50 IOPS, 35.25 MiB/s [2024-12-16T11:42:24.089Z]
8919.33 IOPS, 34.84 MiB/s [2024-12-16T11:42:24.089Z]
8852.25 IOPS, 34.58 MiB/s [2024-12-16T11:42:24.089Z]
8833.20 IOPS, 34.50 MiB/s [2024-12-16T11:42:24.089Z]
8771.50 IOPS, 34.26 MiB/s [2024-12-16T11:42:24.089Z]
8767.57 IOPS, 34.25 MiB/s [2024-12-16T11:42:24.089Z]
8777.88 IOPS, 34.29 MiB/s [2024-12-16T11:42:24.089Z]
[2024-12-16 11:41:35.397204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:18.675 [2024-12-16 11:41:35.397282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[... nvme_qpair.c:243/474 NOTICE pairs like the two above repeat for every outstanding READ and WRITE on sqid:1 (nsid:1, lba 63600 through 64592, len:8), each command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), through 11:41:35.404650 ...]
00:22:18.678 [2024-12-16 11:41:35.404676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:18.678 [2024-12-16 11:41:35.404696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041
p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.404741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.404796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.404841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.404887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.404932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.404958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.404977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.678 [2024-12-16 11:41:35.405665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.405925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.405944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.406588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.678 [2024-12-16 11:41:35.406625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:18.678 [2024-12-16 11:41:35.406659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.679 [2024-12-16 11:41:35.406773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.406997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.407969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.407988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.408053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.408112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.408157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.408207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:22:18.679 [2024-12-16 11:41:35.408234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.679 [2024-12-16 11:41:35.408254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.679 [2024-12-16 11:41:35.408298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.679 [2024-12-16 11:41:35.408324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.680 [2024-12-16 11:41:35.408617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.408977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.408996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.680 [2024-12-16 11:41:35.409625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.409652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.409672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.680 [2024-12-16 11:41:35.410966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.680 [2024-12-16 11:41:35.410993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.411030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.411081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0 00:22:18.681 [2024-12-16 11:41:35.411845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.411977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.411996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.681 [2024-12-16 11:41:35.412553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.681 [2024-12-16 11:41:35.412752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.681 [2024-12-16 11:41:35.412771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:22:18.681 [2024-12-16 11:41:35.412796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.681 [2024-12-16 11:41:35.412814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... several hundred further *NOTICE* pairs elided: alternating nvme_io_qpair_print_command READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) prints for sqid:1 nsid:1, lba 63600-64616, len:8, each followed by a spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:22:18.686 [2024-12-16 11:41:35.438992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64392 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:22:18.686 [2024-12-16 11:41:35.439027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.686 [2024-12-16 11:41:35.439057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.439445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.439464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.687 [2024-12-16 11:41:35.440872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.440917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.440961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.440987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:22:18.687 [2024-12-16 11:41:35.441165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.687 [2024-12-16 11:41:35.441453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:18.687 [2024-12-16 11:41:35.441479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.441955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.441981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.442313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:18.688 [2024-12-16 11:41:35.442536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.442977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:85 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.688 [2024-12-16 11:41:35.442996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.688 [2024-12-16 11:41:35.443979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.688 [2024-12-16 11:41:35.443998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:22:18.689 [2024-12-16 11:41:35.444597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.444965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.444984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.689 [2024-12-16 11:41:35.445551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.689 [2024-12-16 11:41:35.445855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.689 [2024-12-16 11:41:35.445874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.445900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.690 [2024-12-16 11:41:35.445918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.445943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.690 [2024-12-16 11:41:35.445962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.445988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.446781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.446800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.447558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.690 [2024-12-16 11:41:35.447590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.690 [2024-12-16 11:41:35.447624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:18.690 [2024-12-16 11:41:35.447657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0
[... several hundred similar NOTICE records omitted (2024-12-16 11:41:35.447686 through 11:41:35.459606): alternating nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, nsid:1, lba 63600-64616, len:8, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing and wrapping at 0x7f ...]
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.695 [2024-12-16 11:41:35.459912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.695 [2024-12-16 11:41:35.459930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.459956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.459975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.696 [2024-12-16 11:41:35.460129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.460633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.460960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.460986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.696 [2024-12-16 11:41:35.461017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.696 [2024-12-16 11:41:35.461461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.696 [2024-12-16 11:41:35.461481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
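The (03/02) pair printed on every completion above is NVMe status code type (SCT) 0x3, Path Related Status, with status code (SC) 0x02, Asymmetric Access Inaccessible: the expected completion status while the I/O path's ANA group is inaccessible during the failover this test exercises. As a minimal sketch (not part of the test output, and assuming only the line format visible above), the statuses in a captured console log can be tallied like this:

    import re
    from collections import Counter

    # Matches the completion half of each trace pair, e.g.
    # "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ..."
    # The exact regex is an assumption about the line format shown above.
    STATUS_RE = re.compile(r"\*NOTICE\*: ([A-Z ]+) \((\d{2})/(\d{2})\) qid:(\d+)")

    def tally(log_text):
        counts = Counter()
        for name, sct, sc, qid in STATUS_RE.findall(log_text):
            # The (sct/sc) fields are treated as hex digits here.
            counts[(name.strip(), int(sct, 16), int(sc, 16), int(qid))] += 1
        return counts

    # tally(open("console.log").read()) maps, e.g.,
    # ("ASYMMETRIC ACCESS INACCESSIBLE", 0x3, 0x2, 1) -> completion count on qid 1.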
00:22:18.697 8700.89 IOPS, 33.99 MiB/s [2024-12-16T11:42:24.111Z] 8699.10 IOPS, 33.98 MiB/s [2024-12-16T11:42:24.111Z] 8687.00 IOPS, 33.93 MiB/s [2024-12-16T11:42:24.111Z] 8690.00 IOPS, 33.95 MiB/s [2024-12-16T11:42:24.111Z] 8686.85 IOPS, 33.93 MiB/s [2024-12-16T11:42:24.111Z] 8693.00 IOPS, 33.96 MiB/s [2024-12-16T11:42:24.111Z] 8692.80 IOPS, 33.96 MiB/s [2024-12-16T11:42:24.111Z]
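The MiB/s column is consistent with 4 KiB per I/O: every command traced above carries len:8 logical blocks, and with an assumed 512 B logical block size (the namespace format is not printed in this log) each I/O moves 4 KiB. A quick sanity check of the samples, in the same sketch style as above:

    # Assumes 512 B logical blocks; "len:8" blocks per I/O as in the traces above.
    BLOCK_BYTES = 512
    BLOCKS_PER_IO = 8
    MIB = 1024 * 1024

    for iops in (8700.89, 8699.10, 8687.00, 8690.00, 8686.85, 8693.00, 8692.80):
        print(f"{iops:8.2f} IOPS -> {iops * BLOCKS_PER_IO * BLOCK_BYTES / MIB:5.2f} MiB/s")

    # -> 33.99, 33.98, 33.93, 33.95, 33.93, 33.96, 33.96 MiB/s,
    #    matching the figures the test itself reports.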
00:22:18.697 [2024-12-16 11:41:42] [... repeated nvme_qpair.c trace pairs continue: READ (sqid:1 nsid:1, lba 760-816, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1 nsid:1, lba 824-1712, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, each again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; the trace continues ...]
BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.700 [2024-12-16 11:41:42.120589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.120955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.120981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.700 [2024-12-16 11:41:42.121488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.700 [2024-12-16 11:41:42.121514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 
11:41:42.121559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.121979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.121999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 
p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.122715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.122734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.123798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.123832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.123878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.123901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.123929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.123948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.123974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.123994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124067] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.701 [2024-12-16 11:41:42.124492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:18.701 [2024-12-16 11:41:42.124518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.124942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.124968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.702 [2024-12-16 11:41:42.124987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.702 [2024-12-16 11:41:42.125341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1400 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:111 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.125976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.125995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.126961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.702 [2024-12-16 11:41:42.126980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.702 [2024-12-16 11:41:42.127023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.703 
[2024-12-16 11:41:42.127519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.127953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.127979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.703 [2024-12-16 11:41:42.128392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.703 [2024-12-16 11:41:42.128418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.703 [2024-12-16 11:41:42.128438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:22:18.703 [2024-12-16 11:41:42.128464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:18.703 [2024-12-16 11:41:42.128483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:22:18.705 [2024-12-16 11:41:42.145565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.705 [2024-12-16 11:41:42.145591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats from 11:41:42.128 through 11:41:42.156 for the remaining I/O queued on qid:1 (WRITEs lba:824-1776 and READs lba:760-816, one pair per cid); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with dnr:0 ...]
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.708 [2024-12-16 11:41:42.156830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.708 [2024-12-16 11:41:42.156875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.708 [2024-12-16 11:41:42.156900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.156919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.156945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.156964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.156990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.157402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.157999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.709 
[2024-12-16 11:41:42.158561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.158963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.158995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.159041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.159063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.159088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.159108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.159134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.709 [2024-12-16 11:41:42.159153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.709 [2024-12-16 11:41:42.159179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.710 [2024-12-16 11:41:42.159856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.159971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.159990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 
11:41:42.160444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:18.710 [2024-12-16 11:41:42.160905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.160976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.160995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.161042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.710 [2024-12-16 11:41:42.161062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.710 [2024-12-16 11:41:42.161089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.161829] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.161849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.162840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.162873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.162917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.162940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.162967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.162986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:18.711 [2024-12-16 11:41:42.163789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.711 [2024-12-16 11:41:42.163941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.711 [2024-12-16 11:41:42.163966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.712 [2024-12-16 11:41:42.163985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.712 [2024-12-16 11:41:42.164024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.712 [2024-12-16 11:41:42.164048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.712 [2024-12-16 11:41:42.164075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.712 [2024-12-16 11:41:42.164107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.712 [2024-12-16 11:41:42.164135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.712 [2024-12-16 11:41:42.164160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.712 [2024-12-16 11:41:42.164188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.712 [2024-12-16 11:41:42.164207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.712 [2024-12-16 11:41:42.164233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.712 [2024-12-16 11:41:42.164252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:22:18.712 [2024-12-16 11:41:42.164277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.712 [2024-12-16 11:41:42.164296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:22:18.712 [... identical command/completion pairs repeat through 11:41:42.172434: READ lba:760-816 and WRITE lba:824-1776 on sqid:1, len:8 each, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:22:18.716 8197.94 IOPS, 32.02 MiB/s [2024-12-16T11:42:24.130Z] 8150.88 IOPS, 31.84 MiB/s [2024-12-16T11:42:24.130Z] 8209.06 IOPS, 32.07 MiB/s [2024-12-16T11:42:24.130Z] 8254.53 IOPS, 32.24 MiB/s [2024-12-16T11:42:24.130Z] 8297.00 IOPS, 32.41 MiB/s [2024-12-16T11:42:24.130Z] 8329.57 IOPS, 32.54 MiB/s [2024-12-16T11:42:24.130Z] 8358.59 IOPS, 32.65 MiB/s [2024-12-16T11:42:24.130Z]
00:22:18.716 [2024-12-16 11:41:49.380233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:18.716 [2024-12-16 11:41:49.380305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:22:18.716 [... identical WRITE command/completion pairs repeat through 11:41:49.383424: lba:54184-54576 on sqid:1, len:8 each, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:22:18.717 [2024-12-16 11:41:49.383451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54584 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.383958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.383986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.717 [2024-12-16 11:41:49.384018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 11:41:49.384380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.717 [2024-12-16 11:41:49.384400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.717 [2024-12-16 
11:41:49.384427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.384720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.384991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 
cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.718 [2024-12-16 11:41:49.385492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.385961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.385991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 
11:41:49.386294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.718 [2024-12-16 11:41:49.386443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.718 [2024-12-16 11:41:49.386474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54872 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:41:49.386842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.386892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.386942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.386972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.386991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:41:49.387706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:41:49.387725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.719 8128.57 IOPS, 31.75 MiB/s [2024-12-16T11:42:24.133Z] 7789.88 IOPS, 30.43 MiB/s [2024-12-16T11:42:24.133Z] 7478.28 IOPS, 29.21 MiB/s [2024-12-16T11:42:24.133Z] 7190.65 IOPS, 28.09 MiB/s [2024-12-16T11:42:24.133Z] 6924.33 IOPS, 27.05 MiB/s [2024-12-16T11:42:24.133Z] 6677.04 IOPS, 26.08 MiB/s [2024-12-16T11:42:24.133Z] 6446.79 IOPS, 25.18 MiB/s [2024-12-16T11:42:24.133Z] 6408.13 IOPS, 25.03 MiB/s [2024-12-16T11:42:24.133Z] 6479.55 IOPS, 25.31 MiB/s [2024-12-16T11:42:24.133Z] 6539.97 IOPS, 25.55 MiB/s [2024-12-16T11:42:24.133Z] 6598.39 IOPS, 25.77 MiB/s [2024-12-16T11:42:24.133Z] 6656.71 IOPS, 26.00 MiB/s [2024-12-16T11:42:24.133Z] 6709.00 IOPS, 26.21 MiB/s [2024-12-16T11:42:24.133Z] [2024-12-16 11:42:02.919711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.919777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.919908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.919942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.919972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.919992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.719 [2024-12-16 11:42:02.920819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:42:02.920861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:18.719 [2024-12-16 11:42:02.920885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:42:02.920903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:42:02.920945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.920969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:42:02.920986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.921010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.719 [2024-12-16 11:42:02.921045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.719 [2024-12-16 11:42:02.921085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 
11:42:02.921860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.921969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.921987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103080 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.720 [2024-12-16 11:42:02.922956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.720 [2024-12-16 11:42:02.922974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.721 [2024-12-16 11:42:02.922998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.721 [2024-12-16 11:42:02.923032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.721 [2024-12-16 11:42:02.923097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.721 [2024-12-16 11:42:02.923117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.721 [2024-12-16 11:42:02.923143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.721 [2024-12-16 11:42:02.923161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.721 [2024-12-16 11:42:02.923187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.721 [2024-12-16 11:42:02.923205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.721 [2024-12-16 11:42:02.923231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.721 [2024-12-16 11:42:02.923249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 
m:0 dnr:0
00:22:18.721 [2024-12-16 11:42:02.923285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.721 [2024-12-16 11:42:02.923307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... repetitive NOTICE command/completion pairs elided: 3 further READs (lba 103248-103264) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); 4 admin ASYNC EVENT REQUESTs (cid:0-3) and 1 GET LOG PAGE (cid:4) completed with ABORTED - SQ DELETION (00/08); 1 recv-state error on tqpair=0x6f0d90; then ~90 queued READ/WRITE I/Os (lba 102840-103584) each completed with ABORTED - SQ DELETION (00/08) while the qpair was torn down ...]
00:22:18.723 [2024-12-16 11:42:02.929571] bdev_nvme.c:8350:bdev_nvme_readv: *ERROR*: readv failed: rc = -6
[... 36 further identical readv failure lines elided ...]
00:22:18.723 [2024-12-16 11:42:02.932165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:18.723 [2024-12-16 11:42:02.932231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f0d90 (9): Bad file descriptor
00:22:18.723 [2024-12-16 11:42:02.932741]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.723 [2024-12-16 11:42:02.932782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f0d90 with addr=10.0.0.3, port=4421 00:22:18.724 [2024-12-16 11:42:02.932805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f0d90 is same with the state(6) to be set 00:22:18.724 [2024-12-16 11:42:02.933170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f0d90 (9): Bad file descriptor 00:22:18.724 [2024-12-16 11:42:02.933433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:18.724 [2024-12-16 11:42:02.933464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:18.724 [2024-12-16 11:42:02.933483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:18.724 [2024-12-16 11:42:02.933516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:18.724 [2024-12-16 11:42:02.933547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:18.724 6750.72 IOPS, 26.37 MiB/s [2024-12-16T11:42:24.138Z] 6790.59 IOPS, 26.53 MiB/s [2024-12-16T11:42:24.138Z] 6838.50 IOPS, 26.71 MiB/s [2024-12-16T11:42:24.138Z] 6892.90 IOPS, 26.93 MiB/s [2024-12-16T11:42:24.138Z] 6943.40 IOPS, 27.12 MiB/s [2024-12-16T11:42:24.138Z] 6993.73 IOPS, 27.32 MiB/s [2024-12-16T11:42:24.138Z] 7041.67 IOPS, 27.51 MiB/s [2024-12-16T11:42:24.138Z] 7089.05 IOPS, 27.69 MiB/s [2024-12-16T11:42:24.138Z] 7128.14 IOPS, 27.84 MiB/s [2024-12-16T11:42:24.138Z] 7177.11 IOPS, 28.04 MiB/s [2024-12-16T11:42:24.138Z] [2024-12-16 11:42:13.000428] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
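[Editor's note: the MiB/s column in the bdevperf progress lines above is plain arithmetic on the IOPS value at this job's fixed 4096-byte I/O size, MiB/s = IOPS * 4096 / 2^20. A one-line sanity check against the final average, purely illustrative:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7515.86 * 4096 / 1048576 }'
    # prints 29.36 MiB/s, matching the Total row in the latency summary below]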
00:22:18.724 7220.09 IOPS, 28.20 MiB/s [2024-12-16T11:42:24.138Z] 7259.81 IOPS, 28.36 MiB/s [2024-12-16T11:42:24.138Z] 7293.25 IOPS, 28.49 MiB/s [2024-12-16T11:42:24.138Z] 7322.43 IOPS, 28.60 MiB/s [2024-12-16T11:42:24.138Z] 7351.92 IOPS, 28.72 MiB/s [2024-12-16T11:42:24.138Z] 7377.65 IOPS, 28.82 MiB/s [2024-12-16T11:42:24.138Z] 7406.67 IOPS, 28.93 MiB/s [2024-12-16T11:42:24.138Z] 7436.25 IOPS, 29.05 MiB/s [2024-12-16T11:42:24.138Z] 7461.43 IOPS, 29.15 MiB/s [2024-12-16T11:42:24.138Z] 7489.16 IOPS, 29.25 MiB/s [2024-12-16T11:42:24.138Z] 7516.23 IOPS, 29.36 MiB/s [2024-12-16T11:42:24.138Z] Received shutdown signal, test time was about 56.157839 seconds 00:22:18.724 00:22:18.724 Latency(us) 00:22:18.724 [2024-12-16T11:42:24.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.724 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.724 Verification LBA range: start 0x0 length 0x4000 00:22:18.724 Nvme0n1 : 56.16 7515.86 29.36 0.00 0.00 16997.19 205.73 7046430.72 00:22:18.724 [2024-12-16T11:42:24.138Z] =================================================================================================================== 00:22:18.724 [2024-12-16T11:42:24.138Z] Total : 7515.86 29.36 0.00 0.00 16997.19 205.73 7046430.72 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.724 rmmod nvme_tcp 00:22:18.724 rmmod nvme_fabrics 00:22:18.724 rmmod nvme_keyring 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 97504 ']' 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 97504 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97504 ']' 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97504 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.724 11:42:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97504 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97504' 00:22:18.724 killing process with pid 97504 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97504 00:22:18.724 11:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97504 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:18.724 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:18.982 00:22:18.982 real 1m1.857s 00:22:18.982 user 2m55.898s 00:22:18.982 sys 0m13.514s 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:18.982 ************************************ 00:22:18.982 END TEST nvmf_host_multipath 00:22:18.982 ************************************ 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.982 ************************************ 00:22:18.982 START TEST nvmf_timeout 00:22:18.982 ************************************ 00:22:18.982 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:19.240 * Looking for test storage... 00:22:19.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.240 --rc genhtml_branch_coverage=1 00:22:19.240 --rc genhtml_function_coverage=1 00:22:19.240 --rc genhtml_legend=1 00:22:19.240 --rc geninfo_all_blocks=1 00:22:19.240 --rc geninfo_unexecuted_blocks=1 00:22:19.240 00:22:19.240 ' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.240 --rc genhtml_branch_coverage=1 00:22:19.240 --rc genhtml_function_coverage=1 00:22:19.240 --rc genhtml_legend=1 00:22:19.240 --rc geninfo_all_blocks=1 00:22:19.240 --rc geninfo_unexecuted_blocks=1 00:22:19.240 00:22:19.240 ' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.240 --rc genhtml_branch_coverage=1 00:22:19.240 --rc genhtml_function_coverage=1 00:22:19.240 --rc genhtml_legend=1 00:22:19.240 --rc geninfo_all_blocks=1 00:22:19.240 --rc geninfo_unexecuted_blocks=1 00:22:19.240 00:22:19.240 ' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.240 --rc genhtml_branch_coverage=1 00:22:19.240 --rc genhtml_function_coverage=1 00:22:19.240 --rc genhtml_legend=1 00:22:19.240 --rc geninfo_all_blocks=1 00:22:19.240 --rc geninfo_unexecuted_blocks=1 00:22:19.240 00:22:19.240 ' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.240 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.241 11:42:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
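[Editor's note: the nvmf_veth_init sequence in the trace below first tears down any leftover interfaces (hence the harmless "Cannot find device" messages) and then rebuilds a fixed two-namespace topology. A condensed sketch of what those ip commands construct, using the same interface names and addresses that appear in the trace; the second initiator/target pair, the link-up steps, and the iptables ACCEPT rules are omitted here:

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address (the 10.0.0.3 seen in the earlier connect errors)
    ip link add nvmf_br type bridge                            # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br]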
00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:19.241 Cannot find device "nvmf_init_br" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:19.241 Cannot find device "nvmf_init_br2" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:19.241 Cannot find device "nvmf_tgt_br" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.241 Cannot find device "nvmf_tgt_br2" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:19.241 Cannot find device "nvmf_init_br" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:19.241 Cannot find device "nvmf_init_br2" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:19.241 Cannot find device "nvmf_tgt_br" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:19.241 Cannot find device "nvmf_tgt_br2" 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:19.241 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:19.499 Cannot find device "nvmf_br" 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:19.499 Cannot find device "nvmf_init_if" 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:19.499 Cannot find device "nvmf_init_if2" 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:19.499 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.500 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:19.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:19.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:19.758 00:22:19.758 --- 10.0.0.3 ping statistics --- 00:22:19.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.758 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:19.758 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:19.758 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:22:19.758 00:22:19.758 --- 10.0.0.4 ping statistics --- 00:22:19.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.758 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:19.758 00:22:19.758 --- 10.0.0.1 ping statistics --- 00:22:19.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.758 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:19.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:22:19.758 00:22:19.758 --- 10.0.0.2 ping statistics --- 00:22:19.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.758 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.758 11:42:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98908 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98908 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98908 ']' 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:19.758 11:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:19.758 [2024-12-16 11:42:25.069832] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:22:19.758 [2024-12-16 11:42:25.069981] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.015 [2024-12-16 11:42:25.215561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.015 [2024-12-16 11:42:25.280582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.015 [2024-12-16 11:42:25.280647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.015 [2024-12-16 11:42:25.280675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.015 [2024-12-16 11:42:25.280683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.015 [2024-12-16 11:42:25.280690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
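For orientation, the nvmf_veth_init sequence logged above (nvmf/common.sh@145-219) builds a two-path NVMe/TCP topology: two initiator veths in the root namespace, two target veths inside nvmf_tgt_ns_spdk, all joined by one bridge. A condensed sketch using only commands that appear in the log (the ipts wrapper simply appends an SPDK_NVMF comment to each iptables rule):

  # Target interfaces live in their own namespace; the initiator side stays in the root ns.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator, 10.0.0.2/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target, 10.0.0.3/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target, 10.0.0.4/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Enslave the four bridge-side peers to one bridge so both paths can talk.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  # Admit NVMe/TCP traffic (port 4420) on the initiator side and inside the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT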
00:22:20.015 [2024-12-16 11:42:25.281893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.015 [2024-12-16 11:42:25.281902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.977 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:21.243 [2024-12-16 11:42:26.422681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.243 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.501 Malloc0 00:22:21.501 11:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.759 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:22.325 [2024-12-16 11:42:27.700533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=99005 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 99005 /var/tmp/bdevperf.sock 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99005 ']' 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
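Stripped of the interleaved notices, the provisioning that host/timeout.sh@25-29 just performed over /var/tmp/spdk.sock comes down to five RPCs, each visible verbatim above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport for the target
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # exposed as nsid:1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420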
00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.325 11:42:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.584 [2024-12-16 11:42:27.789268] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:22:22.584 [2024-12-16 11:42:27.789386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99005 ] 00:22:22.584 [2024-12-16 11:42:27.926569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.584 [2024-12-16 11:42:27.984835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.843 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.843 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:22.843 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:23.101 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:23.359 NVMe0n1 00:22:23.359 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=99039 00:22:23.359 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.359 11:42:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:23.616 Running I/O for 10 seconds... 
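The two bdevperf-side calls above are what make this a timeout test: bdev_nvme_set_options -r -1 sets the retry count to unlimited (reading -r as the retry-count option), so failed I/O is requeued rather than failed upward, and the attach parameters --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5 tell the driver to retry the connection every 2 s and to delete the controller only if it stays unreachable for 5 s. A throwaway way to watch that state machine from the initiator side while the test runs (illustrative, not part of the script; bdev_nvme_get_controllers is a standard SPDK RPC):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the bdevperf app's view of the controller once a second; the output
  # includes the controller state and the configured reconnect parameters.
  while sleep 1; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
  done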
00:22:24.549 11:42:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:24.807 9277.00 IOPS, 36.24 MiB/s [2024-12-16T11:42:30.221Z] [2024-12-16 11:42:30.013567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7550 is same with the state(6) to be set
00:22:24.807 [... the tcp.c:1790 recv-state *ERROR* line above repeats for tqpair=0x10b7550 throughout the listener teardown; duplicates elided ...]
00:22:24.808 [2024-12-16 11:42:30.014951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:24.808 [2024-12-16 11:42:30.015038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:24.810 [... an identical command/completion pair follows for every outstanding I/O on qid:1 (READs lba 88448-88984 and WRITEs lba 89016-89192), each ABORTED - SQ DELETION as the submission queue is deleted; duplicates elided ...]
[2024-12-16 11:42:30.017274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.810 [2024-12-16 11:42:30.017285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.810 [2024-12-16 11:42:30.017313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.810 [2024-12-16 11:42:30.017336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.810 [2024-12-16 11:42:30.017359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.810 [2024-12-16 11:42:30.017382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.810 [2024-12-16 11:42:30.017406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.810 [2024-12-16 11:42:30.017435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.810 [2024-12-16 11:42:30.017445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.017977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.017989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89416 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.018000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.018022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.018060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.018089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.811 [2024-12-16 11:42:30.018113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:24.811 [2024-12-16 11:42:30.018160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:22:24.811 [2024-12-16 11:42:30.018171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:24.811 [2024-12-16 11:42:30.018196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:24.811 [2024-12-16 11:42:30.018205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:22:24.811 [2024-12-16 11:42:30.018214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.811 [2024-12-16 11:42:30.018533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:24.811 [2024-12-16 11:42:30.018629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915f30 (9): Bad file descriptor 00:22:24.811 [2024-12-16 11:42:30.018746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.811 [2024-12-16 11:42:30.018770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1915f30 with addr=10.0.0.3, port=4420 00:22:24.811 [2024-12-16 11:42:30.018783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915f30 is same with the state(6) to be set 00:22:24.811 [2024-12-16 11:42:30.018804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915f30 (9): Bad file 
descriptor 00:22:24.811 [2024-12-16 11:42:30.018823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:24.811 [2024-12-16 11:42:30.018834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:24.811 [2024-12-16 11:42:30.018847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:24.812 [2024-12-16 11:42:30.018859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:24.812 [2024-12-16 11:42:30.018871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:24.812 11:42:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:26.678 5528.00 IOPS, 21.59 MiB/s [2024-12-16T11:42:32.092Z] 3685.33 IOPS, 14.40 MiB/s [2024-12-16T11:42:32.092Z] [2024-12-16 11:42:32.019171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.678 [2024-12-16 11:42:32.019256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1915f30 with addr=10.0.0.3, port=4420 00:22:26.678 [2024-12-16 11:42:32.019278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915f30 is same with the state(6) to be set 00:22:26.678 [2024-12-16 11:42:32.019308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915f30 (9): Bad file descriptor 00:22:26.678 [2024-12-16 11:42:32.019348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:26.678 [2024-12-16 11:42:32.019363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:26.678 [2024-12-16 11:42:32.019376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:26.678 [2024-12-16 11:42:32.019390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
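A note on the failure mode above: errno = 111 from posix_sock_create's connect() is ECONNREFUSED on Linux, i.e. nothing is accepting on 10.0.0.3:4420 while the target listener is removed, so each reconnect attempt fails immediately. A quick way to confirm the errno mapping (illustrative one-liner, not part of the test harness):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused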
00:22:26.678 [2024-12-16 11:42:32.019403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:26.678 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:22:26.678 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:26.678 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:27.244 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:22:27.244 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:22:27.244 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:27.244 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:27.503 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:22:27.503 11:42:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:22:28.439 2764.00 IOPS, 10.80 MiB/s [2024-12-16T11:42:34.112Z]
2211.20 IOPS, 8.64 MiB/s [2024-12-16T11:42:34.112Z]
[2024-12-16 11:42:34.019577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:28.698 [2024-12-16 11:42:34.019712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1915f30 with addr=10.0.0.3, port=4420
00:22:28.698 [2024-12-16 11:42:34.019747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915f30 is same with the state(6) to be set
00:22:28.698 [2024-12-16 11:42:34.019775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1915f30 (9): Bad file descriptor
00:22:28.698 [2024-12-16 11:42:34.019798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:28.698 [2024-12-16 11:42:34.019810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:28.698 [2024-12-16 11:42:34.019823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:28.698 [2024-12-16 11:42:34.019839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:28.698 [2024-12-16 11:42:34.019851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:30.574 1842.67 IOPS, 7.20 MiB/s [2024-12-16T11:42:36.247Z]
1579.43 IOPS, 6.17 MiB/s [2024-12-16T11:42:36.247Z]
[2024-12-16 11:42:36.019985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:30.833 [2024-12-16 11:42:36.020077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:30.833 [2024-12-16 11:42:36.020095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:30.833 [2024-12-16 11:42:36.020107] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:22:30.833 [2024-12-16 11:42:36.020122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
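The @57/@58 assertions above check that the controller and bdev are still present while the target is unreachable; stripped of the xtrace noise, the probe is just two RPC queries piped through jq (a minimal sketch using the same socket and names as this run):

    ctrl=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
    # While the controller is still within its reconnect window these print NVMe0 / NVMe0n1;
    # once it has been deleted they come back empty (compare the @62/@63 checks below).
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]]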
00:22:31.784 1382.00 IOPS, 5.40 MiB/s
00:22:31.784 Latency(us)
00:22:31.784 [2024-12-16T11:42:37.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.784 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:31.784 Verification LBA range: start 0x0 length 0x4000
00:22:31.784 NVMe0n1 : 8.19 1350.48 5.28 15.64 0.00 93529.22 2159.71 7015926.69
00:22:31.784 [2024-12-16T11:42:37.198Z] ===================================================================================================================
00:22:31.784 [2024-12-16T11:42:37.198Z] Total : 1350.48 5.28 15.64 0.00 93529.22 2159.71 7015926.69
00:22:31.784 {
00:22:31.784   "results": [
00:22:31.784     {
00:22:31.784       "job": "NVMe0n1",
00:22:31.784       "core_mask": "0x4",
00:22:31.784       "workload": "verify",
00:22:31.784       "status": "finished",
00:22:31.784       "verify_range": {
00:22:31.784         "start": 0,
00:22:31.784         "length": 16384
00:22:31.784       },
00:22:31.784       "queue_depth": 128,
00:22:31.784       "io_size": 4096,
00:22:31.784       "runtime": 8.186749,
00:22:31.784       "iops": 1350.4750176168832,
00:22:31.784       "mibps": 5.27529303756595,
00:22:31.784       "io_failed": 128,
00:22:31.784       "io_timeout": 0,
00:22:31.784       "avg_latency_us": 93529.22477305241,
00:22:31.784       "min_latency_us": 2159.7090909090907,
00:22:31.784       "max_latency_us": 7015926.69090909
00:22:31.784     }
00:22:31.784   ],
00:22:31.784   "core_count": 1
00:22:31.784 }
00:22:32.350 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:22:32.350 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:32.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:32.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:22:32.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:32.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 99039
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 99005
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99005 ']'
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99005
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99005
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:33.176 killing process with pid 99005
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99005'
11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99005
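A quick cross-check of the result block above (plain arithmetic on the reported fields, not harness output): iops times runtime gives the number of completed I/Os, and MiB/s is IOPS scaled by the 4096-byte I/O size:

    awk 'BEGIN {
      iops = 1350.4750176168832; rt = 8.186749
      printf "completed ios ~ %.0f (io_failed adds 128)\n", iops * rt
      printf "mibps ~ %.5f\n", iops * 4096 / 1048576   # matches "mibps" above
    }'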
00:22:33.176 Received shutdown signal, test time was about 9.498827 seconds
00:22:33.176
00:22:33.176 Latency(us)
00:22:33.176 [2024-12-16T11:42:38.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.176 [2024-12-16T11:42:38.590Z] ===================================================================================================================
00:22:33.176 [2024-12-16T11:42:38.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99005
00:22:33.176 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:33.434 [2024-12-16 11:42:38.761600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:33.434 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:33.434 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=99199
00:22:33.434 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 99199 /var/tmp/bdevperf.sock
00:22:33.435 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99199 ']'
00:22:33.435 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:33.435 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:33.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:33.435 11:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:33.435 [2024-12-16 11:42:38.828735] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:22:33.435 [2024-12-16 11:42:38.828860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99199 ]
00:22:33.694 [2024-12-16 11:42:38.974591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:33.694 [2024-12-16 11:42:39.030924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:33.952 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:33.952 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:33.952 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:34.211 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:22:34.470 NVMe0n1
00:22:34.470 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=99233
00:22:34.470 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:34.470 11:42:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:22:34.470 Running I/O for 10 seconds...
00:22:35.406 11:42:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
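The bdev_nvme_attach_controller call above is where the recovery policy exercised by this test is set: retry the connection every 1 s, fail pending I/O after 2 s, and delete the controller after 5 s without a successful reconnect (consistent with the empty @62/@63 query results after the previous iteration). Restated on its own, with the same socket, address, and timeouts as this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1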
the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 
11:42:41.049681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.049734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f850 is same with the state(6) to be set 00:22:35.668 [2024-12-16 11:42:41.050673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.668 [2024-12-16 11:42:41.050867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:35.668 [2024-12-16 11:42:41.050877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.050890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.050900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.050913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.050932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.050945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.050955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.050967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.050978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.050991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 
[2024-12-16 11:42:41.051142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.669 [2024-12-16 11:42:41.051790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.669 [2024-12-16 11:42:41.051824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.669 [2024-12-16 11:42:41.051847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.669 [2024-12-16 11:42:41.051870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.669 [2024-12-16 11:42:41.051883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.051893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.051906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.051916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.051929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.051939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.051962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.051974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.051984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.051997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 
11:42:41.052367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.670 [2024-12-16 11:42:41.052400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.670 [2024-12-16 11:42:41.052847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.670 [2024-12-16 11:42:41.052860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.052883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.052906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.052928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.052957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.052981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.052992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89400 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 
11:42:41.053338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.671 [2024-12-16 11:42:41.053555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.671 [2024-12-16 11:42:41.053793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.053824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:35.671 [2024-12-16 11:42:41.053836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:35.671 [2024-12-16 
11:42:41.053846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:22:35.671 [2024-12-16 11:42:41.053857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.671 [2024-12-16 11:42:41.054180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:35.672 [2024-12-16 11:42:41.054279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor 00:22:35.672 [2024-12-16 11:42:41.054404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.672 [2024-12-16 11:42:41.054437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420 00:22:35.672 [2024-12-16 11:42:41.054455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set 00:22:35.672 [2024-12-16 11:42:41.054479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor 00:22:35.672 [2024-12-16 11:42:41.054499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:35.672 [2024-12-16 11:42:41.054511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:35.672 [2024-12-16 11:42:41.054523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:35.672 [2024-12-16 11:42:41.054535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:35.672 [2024-12-16 11:42:41.054548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:35.672 11:42:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:36.866 5535.50 IOPS, 21.62 MiB/s [2024-12-16T11:42:42.280Z] [2024-12-16 11:42:42.054740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.866 [2024-12-16 11:42:42.054827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420 00:22:36.866 [2024-12-16 11:42:42.054844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set 00:22:36.866 [2024-12-16 11:42:42.054873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor 00:22:36.866 [2024-12-16 11:42:42.054894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:36.866 [2024-12-16 11:42:42.054905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:36.866 [2024-12-16 11:42:42.054916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:36.866 [2024-12-16 11:42:42.054929] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
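For orientation: errno = 111 in the connect() failures above is ECONNREFUSED, and the refusals are induced by the test itself, which drops the subsystem's TCP listener and later restores it with SPDK's rpc.py script. A minimal sketch of that toggle, reusing the exact NQN, address, and port that appear in this log (the surrounding host/timeout.sh logic is not reproduced here):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host reconnect attempts now fail with errno 111 (ECONNREFUSED) until:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Both invocations appear verbatim in this log, at host/timeout.sh@91 just below and at host/timeout.sh@99 further down.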
00:22:36.866 [2024-12-16 11:42:42.054942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:36.866 11:42:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:37.124 [2024-12-16 11:42:42.346995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:37.124 11:42:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 99233
00:22:37.692 3690.33 IOPS, 14.42 MiB/s
[2024-12-16T11:42:43.106Z] [2024-12-16 11:42:43.074258] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:39.563 2767.75 IOPS, 10.81 MiB/s
[2024-12-16T11:42:45.915Z] 3718.40 IOPS, 14.53 MiB/s
[2024-12-16T11:42:47.293Z] 4632.00 IOPS, 18.09 MiB/s
[2024-12-16T11:42:48.228Z] 5321.57 IOPS, 20.79 MiB/s
[2024-12-16T11:42:49.162Z] 5843.50 IOPS, 22.83 MiB/s
[2024-12-16T11:42:50.097Z] 6236.56 IOPS, 24.36 MiB/s
[2024-12-16T11:42:50.097Z] 6615.70 IOPS, 25.84 MiB/s
00:22:44.683 Latency(us)
00:22:44.683 [2024-12-16T11:42:50.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.683 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:44.683 Verification LBA range: start 0x0 length 0x4000
00:22:44.683 NVMe0n1 : 10.01 6617.41 25.85 0.00 0.00 19302.34 1683.08 3019898.88
00:22:44.683 [2024-12-16T11:42:50.097Z] ===================================================================================================================
00:22:44.683 [2024-12-16T11:42:50.097Z] Total : 6617.41 25.85 0.00 0.00 19302.34 1683.08 3019898.88
00:22:44.683 {
00:22:44.683   "results": [
00:22:44.683     {
00:22:44.683       "job": "NVMe0n1",
00:22:44.683       "core_mask": "0x4",
00:22:44.683       "workload": "verify",
00:22:44.683       "status": "finished",
00:22:44.683       "verify_range": {
00:22:44.683         "start": 0,
00:22:44.683         "length": 16384
00:22:44.683       },
00:22:44.683       "queue_depth": 128,
00:22:44.683       "io_size": 4096,
00:22:44.683       "runtime": 10.007242,
00:22:44.683       "iops": 6617.407673362951,
00:22:44.683       "mibps": 25.849248724074027,
00:22:44.683       "io_failed": 0,
00:22:44.683       "io_timeout": 0,
00:22:44.683       "avg_latency_us": 19302.340593430912,
00:22:44.683       "min_latency_us": 1683.0836363636363,
00:22:44.683       "max_latency_us": 3019898.88
00:22:44.683     }
00:22:44.683   ],
00:22:44.683   "core_count": 1
00:22:44.683 }
00:22:44.683 11:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=99350
00:22:44.683 11:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:44.683 11:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:44.683 Running I/O for 10 seconds...
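The perform_tests step above follows bdevperf's usual remote-control pattern: the bdevperf binary is started in wait-for-RPC mode earlier in the test (outside this excerpt), and bdevperf.py then triggers the run over the RPC socket. A hedged sketch of that pairing; the launch line is an assumption reconstructed from the job parameters reported above (queue depth 128, 4 KiB I/Os, verify workload, 10 s runtime), not copied from this log:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # -z waits for an RPC before starting I/O; -r names the RPC socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Only the second command is visible in this log (host/timeout.sh@96); the launch line is inferred.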
00:22:45.618 11:42:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:45.879 9129.00 IOPS, 35.66 MiB/s
[2024-12-16T11:42:51.293Z] [2024-12-16 11:42:51.199319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110dd40 is same with the state(6) to be set
00:22:45.879 [... five further identical "recv state of tqpair=0x110dd40" errors (11:42:51.199388-11:42:51.199426) elided ...]
00:22:45.879 [2024-12-16 11:42:51.199681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.879 [2024-12-16 11:42:51.199712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.881 [... several dozen further notice pairs elided: queued WRITE commands (lba 87160-87232) and READ commands (lba 86384-86824), each completed "ABORTED - SQ DELETION (00/08) qid:1"; the run is still in progress where this excerpt is cut off mid-entry at 11:42:51.201135 ...]
WRITE sqid:1 cid:64 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87320 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.881 [2024-12-16 11:42:51.201393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.881 [2024-12-16 11:42:51.201404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.882 [2024-12-16 11:42:51.201413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 
[2024-12-16 11:42:51.201553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.201966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.201991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.882 [2024-12-16 11:42:51.202250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.882 [2024-12-16 11:42:51.202258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.883 [2024-12-16 11:42:51.202387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.883 [2024-12-16 11:42:51.202414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.883 [2024-12-16 11:42:51.202424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.883 [2024-12-16 11:42:51.202432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:87400 len:8 PRP1 0x0 PRP2 0x0 [2024-12-16 11:42:51.202441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.883 [2024-12-16 11:42:51.202558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:45.883 [2024-12-16 11:42:51.202574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.883 [2024-12-16 11:42:51.202585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:45.883 [2024-12-16 11:42:51.202594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.883 [2024-12-16 11:42:51.202603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:45.883 [2024-12-16 11:42:51.202612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.883 [2024-12-16 11:42:51.202627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:45.883 [2024-12-16 11:42:51.202636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.883 [2024-12-16 11:42:51.202645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set
00:22:45.883 [2024-12-16 11:42:51.202863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:45.883 [2024-12-16 11:42:51.202896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor
00:22:45.883 [2024-12-16 11:42:51.203033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.883 [2024-12-16 11:42:51.203058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420
00:22:45.883 [2024-12-16 11:42:51.203069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set
00:22:45.883 [2024-12-16 11:42:51.203088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor
00:22:45.883 [2024-12-16 11:42:51.203104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:22:45.883 [2024-12-16 11:42:51.203113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:22:45.883 [2024-12-16 11:42:51.203125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:45.883 [2024-12-16 11:42:51.203135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
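The "(00/08)" printed with every aborted command above is SPDK's (status code type / status code) pair: status code type 0x0 is the NVMe generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is what the host-side driver reports when a queue pair is torn down while I/O is still queued. A minimal decoding sketch, covering only the codes visible in this log rather than the full NVMe status tables:

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints, e.g. "(00/08)".
    # Only the codes seen in this log are mapped; the real NVMe tables are far larger.
    GENERIC_STATUS = {          # SCT 0x0: generic command status
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    print(decode_status(0x00, 0x08))  # -> ABORTED - SQ DELETION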
00:22:45.883 [2024-12-16 11:42:51.203146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:45.883 11:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:22:46.828 5399.00 IOPS, 21.09 MiB/s [2024-12-16T11:42:52.242Z] [2024-12-16 11:42:52.203272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.828 [2024-12-16 11:42:52.203390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420
00:22:46.828 [2024-12-16 11:42:52.203408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set
00:22:46.828 [2024-12-16 11:42:52.203435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor
00:22:46.828 [2024-12-16 11:42:52.203454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:22:46.828 [2024-12-16 11:42:52.203473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:22:46.828 [2024-12-16 11:42:52.203485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:46.828 [2024-12-16 11:42:52.203496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:22:46.828 [2024-12-16 11:42:52.203508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:48.036 3599.33 IOPS, 14.06 MiB/s [2024-12-16T11:42:53.450Z] [2024-12-16 11:42:53.203709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:48.036 [2024-12-16 11:42:53.203823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420
00:22:48.036 [2024-12-16 11:42:53.203840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set
00:22:48.036 [2024-12-16 11:42:53.203869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor
00:22:48.036 [2024-12-16 11:42:53.203889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:22:48.036 [2024-12-16 11:42:53.203900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:22:48.036 [2024-12-16 11:42:53.203912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:48.036 [2024-12-16 11:42:53.203924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:22:48.036 [2024-12-16 11:42:53.203938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:48.971 2699.50 IOPS, 10.54 MiB/s [2024-12-16T11:42:54.385Z] [2024-12-16 11:42:54.207403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:48.971 [2024-12-16 11:42:54.207481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5f30 with addr=10.0.0.3, port=4420
00:22:48.971 [2024-12-16 11:42:54.207498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5f30 is same with the state(6) to be set
00:22:48.971 [2024-12-16 11:42:54.207809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5f30 (9): Bad file descriptor
00:22:48.971 [2024-12-16 11:42:54.208076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:22:48.971 [2024-12-16 11:42:54.208099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:22:48.971 [2024-12-16 11:42:54.208112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:48.971 [2024-12-16 11:42:54.208124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:22:48.971 [2024-12-16 11:42:54.208136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:48.971 11:42:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:49.230 [2024-12-16 11:42:54.513193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:49.230 11:42:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 99350
00:22:50.055 2159.60 IOPS, 8.44 MiB/s [2024-12-16T11:42:55.469Z] [2024-12-16 11:42:55.234830] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
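The recovery sequence above is the point of this test phase: while the listener is removed, every reconnect attempt dies in posix_sock_create with errno 111 (ECONNREFUSED on Linux), and only after nvmf_subsystem_add_listener brings the listener back at 11:42:54.513 does the next reset attempt succeed. The errno itself is ordinary socket behavior, reproducible without SPDK; a small self-contained sketch (it assumes nothing is listening locally on port 4420):

    import errno
    import socket

    # Connecting to an address where nothing is listening fails with
    # ECONNREFUSED (errno 111 on Linux), the same error posix_sock_create
    # logs above while the target's listener is down.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("127.0.0.1", 4420))  # assumption: no local listener on 4420
    except OSError as e:
        print(e.errno, e.errno == errno.ECONNREFUSED)  # 111 True (on Linux)
    finally:
        s.close()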
00:22:52.002 2993.33 IOPS, 11.69 MiB/s [2024-12-16T11:42:58.351Z] 3855.29 IOPS, 15.06 MiB/s [2024-12-16T11:42:59.287Z] 4517.75 IOPS, 17.65 MiB/s [2024-12-16T11:43:00.222Z] 5050.78 IOPS, 19.73 MiB/s [2024-12-16T11:43:00.222Z] 5451.90 IOPS, 21.30 MiB/s
00:22:54.808 Latency(us)
00:22:54.808 [2024-12-16T11:43:00.222Z] Device Information : runtime(s)  IOPS     MiB/s    Fail/s   TO/s     Average      min          max
00:22:54.808 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:54.808 Verification LBA range: start 0x0 length 0x4000
00:22:54.808 NVMe0n1 : 10.02 5463.19 21.34 3693.57 0.00 13952.23 662.81 3019898.88
00:22:54.808 [2024-12-16T11:43:00.222Z] ===================================================================================================================
00:22:54.808 [2024-12-16T11:43:00.222Z] Total : 5463.19 21.34 3693.57 0.00 13952.23 0.00 3019898.88
00:22:54.808 {
00:22:54.808   "results": [
00:22:54.808     {
00:22:54.808       "job": "NVMe0n1",
00:22:54.808       "core_mask": "0x4",
00:22:54.808       "workload": "verify",
00:22:54.808       "status": "finished",
00:22:54.808       "verify_range": {
00:22:54.808         "start": 0,
00:22:54.808         "length": 16384
00:22:54.808       },
00:22:54.808       "queue_depth": 128,
00:22:54.808       "io_size": 4096,
00:22:54.808       "runtime": 10.016861,
00:22:54.808       "iops": 5463.18851784007,
00:22:54.808       "mibps": 21.340580147812773,
00:22:54.808       "io_failed": 36998,
00:22:54.808       "io_timeout": 0,
00:22:54.808       "avg_latency_us": 13952.230276229951,
00:22:54.808       "min_latency_us": 662.8072727272727,
00:22:54.808       "max_latency_us": 3019898.88
00:22:54.808     }
00:22:54.808   ],
00:22:54.808   "core_count": 1
00:22:54.808 }
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 99199
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99199 ']'
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99199
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99199
00:22:54.808 killing process with pid 99199
Received shutdown signal, test time was about 10.000000 seconds
00:22:54.808
00:22:54.808 Latency(us)
00:22:54.808 [2024-12-16T11:43:00.222Z] Device Information : runtime(s)  IOPS     MiB/s    Fail/s   TO/s     Average      min          max
00:22:54.808 [2024-12-16T11:43:00.222Z] ===================================================================================================================
00:22:54.808 [2024-12-16T11:43:00.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99199'
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99199
00:22:54.808 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99199
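The summary table and the JSON block above are internally consistent: the MiB/s column is iops * io_size / 2^20, and the Fail/s column is io_failed / runtime. A quick re-derivation using only the values printed in the "results" object:

    # Re-derive the bdevperf summary columns from the JSON fields above.
    iops = 5463.18851784007
    io_size = 4096          # bytes per I/O (from "io_size")
    runtime = 10.016861     # seconds (from "runtime")
    io_failed = 36998       # aborted I/Os (from "io_failed")

    mib_s = iops * io_size / (1 << 20)
    fail_s = io_failed / runtime

    print(f"{mib_s:.2f} MiB/s, {fail_s:.2f} Fail/s")  # 21.34 MiB/s, 3693.57 Fail/s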
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=99475
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 99475 /var/tmp/bdevperf.sock
00:22:55.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99475 ']'
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:55.067 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:55.326 [2024-12-16 11:43:00.345368] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:22:55.326 [2024-12-16 11:43:00.345945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99475 ]
00:22:55.326 [2024-12-16 11:43:00.487110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:55.326 [2024-12-16 11:43:00.543794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:55.326 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:55.326 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:55.326 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=99485
00:22:55.326 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99475 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:22:55.326 11:43:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:22:55.892 11:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:22:56.150 NVMe0n1
00:22:56.150 11:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=99539
00:22:56.150 11:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:56.150 11:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:22:56.150 Running I/O for 10 seconds...
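Before the run starts, the xtrace above wires the second bdevperf instance up over its private RPC socket and attaches the controller with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, i.e. reconnect attempts every 2 seconds and the controller declared lost after 5 seconds, which is the behavior the rest of the test exercises. A sketch of issuing the same RPC sequence from Python; every path and flag is copied verbatim from the log, and this is an illustration rather than part of the test script itself:

    import subprocess

    # Replay the two bdev_nvme RPCs from the xtrace against bdevperf's
    # private RPC socket (paths and arguments taken from the log above).
    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bdevperf.sock"

    def rpc(*args: str) -> None:
        subprocess.run([RPC, "-s", SOCK, *args], check=True)

    rpc("bdev_nvme_set_options", "-r", "-1", "-e", "9")
    rpc("bdev_nvme_attach_controller",
        "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
        "--ctrlr-loss-timeout-sec", "5",  # give up on the controller after 5 s of loss
        "--reconnect-delay-sec", "2")     # retry the connection every 2 s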
00:22:57.125 11:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:57.387 17936.00 IOPS, 70.06 MiB/s [2024-12-16T11:43:02.801Z] [2024-12-16 11:43:02.642518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110a90 is same with the state(6) to be set
[the tcp.c:1790 message above repeats verbatim through 11:43:02.643624, on the order of a hundred consecutive occurrences differing only in timestamps]
00:22:57.388 [2024-12-16 11:43:02.643719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.388 [2024-12-16 11:43:02.643747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.388 [2024-12-16 11:43:02.643760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.388 [2024-12-16 11:43:02.643769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.388 [2024-12-16 11:43:02.643779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.388 [2024-12-16 11:43:02.643788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.388 [2024-12-16 11:43:02.643798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.388 [2024-12-16 11:43:02.643807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.388 [2024-12-16 11:43:02.643816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146df30 is same with the state(6) to be set
00:22:57.388 [2024-12-16 11:43:02.643871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:57.388 [2024-12-16 11:43:02.643892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.392 [2024-12-16 11:43:02.646445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:57.392 [2024-12-16 11:43:02.646454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.392 [2024-12-16 11:43:02.646463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d9340 is same with the state(6) to be set
00:22:57.392 [2024-12-16 11:43:02.646474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:57.392 [2024-12-16 11:43:02.646482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:57.392 [2024-12-16 11:43:02.646490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60264 len:8 PRP1 0x0 PRP2 0x0
00:22:57.392 [2024-12-16 11:43:02.646498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.392 [2024-12-16 11:43:02.646816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:57.392 [2024-12-16 11:43:02.646850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146df30 (9): Bad file descriptor
00:22:57.392 [2024-12-16 11:43:02.646957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:57.392 [2024-12-16 11:43:02.646979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146df30 with addr=10.0.0.3, port=4420
00:22:57.392 [2024-12-16 11:43:02.646989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146df30 is same with the state(6) to be set
00:22:57.392 [2024-12-16 11:43:02.647019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146df30 (9): Bad file descriptor
00:22:57.392 [2024-12-16 11:43:02.647038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:57.392 [2024-12-16 11:43:02.647047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:57.392 [2024-12-16 11:43:02.647058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:57.392 [2024-12-16 11:43:02.647068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:57.392 [2024-12-16 11:43:02.647078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:57.392 11:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 99539
00:22:59.264 10004.00 IOPS, 39.08 MiB/s
[2024-12-16T11:43:04.678Z] 6669.33 IOPS, 26.05 MiB/s
[2024-12-16T11:43:04.678Z] [2024-12-16 11:43:04.663400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.264 [2024-12-16 11:43:04.663475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146df30 with addr=10.0.0.3, port=4420
00:22:59.264 [2024-12-16 11:43:04.663492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146df30 is same with the state(6) to be set
00:22:59.264 [2024-12-16 11:43:04.663531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146df30 (9): Bad file descriptor
00:22:59.264 [2024-12-16 11:43:04.663553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:59.264 [2024-12-16 11:43:04.663563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:59.264 [2024-12-16 11:43:04.663574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:59.264 [2024-12-16 11:43:04.663585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:59.264 [2024-12-16 11:43:04.663597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:01.137 5002.00 IOPS, 19.54 MiB/s
[2024-12-16T11:43:06.810Z] 4001.60 IOPS, 15.63 MiB/s
[2024-12-16T11:43:06.810Z] [2024-12-16 11:43:06.663775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.396 [2024-12-16 11:43:06.663835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146df30 with addr=10.0.0.3, port=4420
00:23:01.396 [2024-12-16 11:43:06.663852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146df30 is same with the state(6) to be set
00:23:01.396 [2024-12-16 11:43:06.663878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146df30 (9): Bad file descriptor
00:23:01.396 [2024-12-16 11:43:06.663898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:23:01.396 [2024-12-16 11:43:06.663908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:23:01.396 [2024-12-16 11:43:06.663919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:01.396 [2024-12-16 11:43:06.663930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:23:01.396 [2024-12-16 11:43:06.663940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:03.265 3334.67 IOPS, 13.03 MiB/s
[2024-12-16T11:43:08.679Z] 2858.29 IOPS, 11.17 MiB/s
[2024-12-16T11:43:08.679Z] [2024-12-16 11:43:08.664004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:03.265 [2024-12-16 11:43:08.664076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:23:03.265 [2024-12-16 11:43:08.664089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:23:03.265 [2024-12-16 11:43:08.664100] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:23:03.265 [2024-12-16 11:43:08.664112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:23:04.458 2501.00 IOPS, 9.77 MiB/s
00:23:04.458 Latency(us)
00:23:04.458 [2024-12-16T11:43:09.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.458 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:23:04.458 NVMe0n1 : 8.12 2463.47 9.62 15.76 0.00 51570.59 2278.87 7015926.69
00:23:04.458 [2024-12-16T11:43:09.872Z] ===================================================================================================================
00:23:04.458 [2024-12-16T11:43:09.872Z] Total : 2463.47 9.62 15.76 0.00 51570.59 2278.87 7015926.69
00:23:04.458 {
00:23:04.458   "results": [
00:23:04.459     {
00:23:04.459       "job": "NVMe0n1",
00:23:04.459       "core_mask": "0x4",
00:23:04.459       "workload": "randread",
00:23:04.459       "status": "finished",
00:23:04.459       "queue_depth": 128,
00:23:04.459       "io_size": 4096,
00:23:04.459       "runtime": 8.121863,
00:23:04.459       "iops": 2463.4742053639666,
00:23:04.459       "mibps": 9.622946114702994,
00:23:04.459       "io_failed": 128,
00:23:04.459       "io_timeout": 0,
00:23:04.459       "avg_latency_us": 51570.58725322354,
00:23:04.459       "min_latency_us": 2278.8654545454547,
00:23:04.459       "max_latency_us": 7015926.69090909
00:23:04.459     }
00:23:04.459   ],
00:23:04.459   "core_count": 1
00:23:04.459 }
00:23:04.459 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:04.459 Attaching 5 probes...
1468.481732: reset bdev controller NVMe0
1468.562841: reconnect bdev controller NVMe0
3484.901091: reconnect delay bdev controller NVMe0
3484.923808: reconnect bdev controller NVMe0
5485.306267: reconnect delay bdev controller NVMe0
5485.327993: reconnect bdev controller NVMe0
7485.651008: reconnect delay bdev controller NVMe0
7485.675884: reconnect bdev controller NVMe0
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 99485
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 99475
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99475 ']'
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99475
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99475
00:23:04.459 killing process with pid 99475
Received shutdown signal, test time was about 8.196269 seconds
00:23:04.459
00:23:04.459 Latency(us)
00:23:04.459 [2024-12-16T11:43:09.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.459 [2024-12-16T11:43:09.873Z] ===================================================================================================================
00:23:04.459 [2024-12-16T11:43:09.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
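As a sanity check on the run summary above, the reported throughput follows directly from the reported IOPS and the 4096-byte I/O size (mibps = iops * io_size / 2^20). A minimal shell sketch of that arithmetic, using the JSON field values above (the variable names here are illustrative only, not part of the test scripts):

    # Recompute MiB/s from the "iops" and "io_size" fields of the JSON summary
    iops=2463.4742053639666   # "iops" field above
    io_size=4096              # "io_size" field above, bytes per I/O
    awk -v i="$iops" -v s="$io_size" \
        'BEGIN { printf "%.6f MiB/s\n", i * s / (1024 * 1024) }'
    # prints 9.622946 MiB/s, matching the reported "mibps" field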
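The grep -c / (( 3 <= 2 )) pair above appears to be the test's pass condition: trace.txt recorded three 'reconnect delay' probes, and the arithmetic test only takes the failure branch when that count is 2 or lower. A paraphrased sketch of that check, assuming the same grep pattern as above (not the verbatim host/timeout.sh source; $trace_file stands in for the trace path the test uses):

    # Fail unless the controller was observed backing off at least 3 times
    delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    if (( delay_count <= 2 )); then
        echo "expected more than 2 delayed reconnects, got $delay_count" >&2
        exit 1
    fi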
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:04.459 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:04.459 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99475' 00:23:04.459 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99475 00:23:04.459 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99475 00:23:04.718 11:43:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.976 rmmod nvme_tcp 00:23:04.976 rmmod nvme_fabrics 00:23:04.976 rmmod nvme_keyring 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98908 ']' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98908 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98908 ']' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98908 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98908 00:23:04.976 killing process with pid 98908 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98908' 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98908 00:23:04.976 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98908 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.235 11:43:10 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:05.235 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:05.493 00:23:05.493 real 0m46.521s 00:23:05.493 user 2m16.345s 00:23:05.493 sys 0m4.879s 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:05.493 ************************************ 00:23:05.493 END TEST nvmf_timeout 00:23:05.493 ************************************ 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:05.493 00:23:05.493 real 5m42.820s 00:23:05.493 user 14m42.751s 00:23:05.493 sys 1m3.989s 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.493 11:43:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
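
The iptr step traced above is the firewall half of nvmftestfini: every rule the framework installs is tagged with an SPDK_NVMF comment (the ipts wrapper that adds the tag appears again during the next test's setup below), so teardown can drop exactly those rules in a single pass. The idiom, extracted from the commands visible in this log:

  # Setup: tag each inserted rule with a comment naming it.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # Teardown: filter the tagged rules out of the saved ruleset and reload it.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Untagged rules survive the save/restore round trip untouched, so the cleanup is safe on machines that already carry a firewall configuration.
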
00:23:05.493 ************************************ 00:23:05.493 END TEST nvmf_host 00:23:05.493 ************************************ 00:23:05.752 11:43:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:05.752 11:43:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:23:05.752 11:43:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:05.752 11:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:05.752 11:43:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.752 11:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.752 ************************************ 00:23:05.752 START TEST nvmf_target_core_interrupt_mode 00:23:05.752 ************************************ 00:23:05.752 11:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:05.752 * Looking for test storage... 00:23:05.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:23:05.752 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.753 --rc genhtml_branch_coverage=1 00:23:05.753 --rc genhtml_function_coverage=1 00:23:05.753 --rc genhtml_legend=1 00:23:05.753 --rc geninfo_all_blocks=1 00:23:05.753 --rc geninfo_unexecuted_blocks=1 00:23:05.753 00:23:05.753 ' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.753 --rc genhtml_branch_coverage=1 00:23:05.753 --rc genhtml_function_coverage=1 00:23:05.753 --rc genhtml_legend=1 00:23:05.753 --rc geninfo_all_blocks=1 00:23:05.753 --rc geninfo_unexecuted_blocks=1 00:23:05.753 00:23:05.753 ' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.753 --rc genhtml_branch_coverage=1 00:23:05.753 --rc genhtml_function_coverage=1 00:23:05.753 --rc genhtml_legend=1 00:23:05.753 --rc geninfo_all_blocks=1 00:23:05.753 --rc geninfo_unexecuted_blocks=1 00:23:05.753 00:23:05.753 ' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.753 --rc genhtml_branch_coverage=1 00:23:05.753 --rc genhtml_function_coverage=1 00:23:05.753 --rc genhtml_legend=1 00:23:05.753 --rc geninfo_all_blocks=1 00:23:05.753 --rc geninfo_unexecuted_blocks=1 00:23:05.753 00:23:05.753 ' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.753 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:06.012 ************************************ 00:23:06.012 START TEST nvmf_abort 00:23:06.012 ************************************ 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:06.012 * Looking for test storage... 00:23:06.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:23:06.012 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.013 --rc genhtml_branch_coverage=1 00:23:06.013 --rc genhtml_function_coverage=1 00:23:06.013 --rc genhtml_legend=1 00:23:06.013 --rc geninfo_all_blocks=1 00:23:06.013 --rc geninfo_unexecuted_blocks=1 00:23:06.013 00:23:06.013 ' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.013 --rc genhtml_branch_coverage=1 00:23:06.013 --rc genhtml_function_coverage=1 00:23:06.013 --rc genhtml_legend=1 00:23:06.013 --rc geninfo_all_blocks=1 00:23:06.013 --rc geninfo_unexecuted_blocks=1 00:23:06.013 00:23:06.013 ' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.013 --rc genhtml_branch_coverage=1 00:23:06.013 --rc genhtml_function_coverage=1 00:23:06.013 --rc genhtml_legend=1 00:23:06.013 --rc geninfo_all_blocks=1 00:23:06.013 --rc geninfo_unexecuted_blocks=1 00:23:06.013 00:23:06.013 ' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:06.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.013 --rc genhtml_branch_coverage=1 00:23:06.013 --rc genhtml_function_coverage=1 00:23:06.013 --rc genhtml_legend=1 00:23:06.013 --rc geninfo_all_blocks=1 00:23:06.013 --rc geninfo_unexecuted_blocks=1 00:23:06.013 00:23:06.013 ' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.013 11:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.013 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:06.014 Cannot find device "nvmf_init_br" 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:06.014 Cannot find device "nvmf_init_br2" 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:23:06.014 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:06.274 Cannot find device "nvmf_tgt_br" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.274 Cannot find device "nvmf_tgt_br2" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:06.274 Cannot find device "nvmf_init_br" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:06.274 Cannot find device "nvmf_init_br2" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:06.274 Cannot find device "nvmf_tgt_br" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:06.274 Cannot find device "nvmf_tgt_br2" 00:23:06.274 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:06.275 Cannot find device "nvmf_br" 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:06.275 Cannot find device "nvmf_init_if" 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:06.275 Cannot find device "nvmf_init_if2" 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:06.275 
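
At this point nvmf_veth_init has rebuilt the test topology from scratch: the target runs in its own network namespace and reaches the initiator side over veth pairs joined by a bridge. Condensed to a single initiator/target pair, the commands traced here amount to the following sketch (the full script creates two pairs per side, for the secondary addresses 10.0.0.2 and 10.0.0.4):

  # One veth pair per side; the *_br ends are enslaved to a common bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The string of Cannot find device messages above is expected: teardown runs before setup and tolerates missing interfaces (each failing ip link call is followed by true in the trace), so a run starts clean whether or not the previous test left devices behind.
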
11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.275 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:06.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:06.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:23:06.534 00:23:06.534 --- 10.0.0.3 ping statistics --- 00:23:06.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.534 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:06.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:06.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:23:06.534 00:23:06.534 --- 10.0.0.4 ping statistics --- 00:23:06.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.534 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:06.534 00:23:06.534 --- 10.0.0.1 ping statistics --- 00:23:06.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.534 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:06.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:06.534 00:23:06.534 --- 10.0.0.2 ping statistics --- 00:23:06.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.534 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=99963 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:06.534 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 99963 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 99963 ']' 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.535 11:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:06.535 [2024-12-16 11:43:11.849838] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:06.535 [2024-12-16 11:43:11.851196] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:23:06.535 [2024-12-16 11:43:11.851266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.793 [2024-12-16 11:43:12.005373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:06.793 [2024-12-16 11:43:12.070783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.793 [2024-12-16 11:43:12.070863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.793 [2024-12-16 11:43:12.070878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.793 [2024-12-16 11:43:12.070888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.793 [2024-12-16 11:43:12.070897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.793 [2024-12-16 11:43:12.072212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.793 [2024-12-16 11:43:12.072354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.793 [2024-12-16 11:43:12.072360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.793 [2024-12-16 11:43:12.183120] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:06.793 [2024-12-16 11:43:12.183417] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:06.794 [2024-12-16 11:43:12.183644] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:06.794 [2024-12-16 11:43:12.183716] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
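
The target is now running inside the namespace in interrupt mode (note the spdk_thread objects switching to intr mode above), and the abort test drives it over JSON-RPC. The rpc_cmd calls traced below correspond to this rpc.py sequence: a 64 MiB malloc bdev wrapped in a delay bdev that adds 1,000,000 us (1 s) of latency to every I/O path, exported through subsystem cnode0 on the target's in-namespace address:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The artificial delay is what makes the abort test deterministic: with completions held back for a second, submitted I/O is guaranteed to still be in flight when the abort commands arrive.
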
00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 [2024-12-16 11:43:12.885460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 Malloc0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 Delay0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 11:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 [2024-12-16 11:43:12.969793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.730 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:07.989 [2024-12-16 11:43:13.157291] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:09.922 Initializing NVMe Controllers 00:23:09.922 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:23:09.922 controller IO queue size 128 less than required 00:23:09.922 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:09.922 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:09.922 Initialization complete. Launching workers. 
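The abort counters that follow summarize this whole setup. Restated as a standalone sketch, every command below is lifted verbatim from the RPCs traced above (rpc.py assumed to target the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev layers a 1,000,000 us delay over Malloc0 for reads and writes, so with queue depth 128 most reads are still pending when the abort example fires its aborts; that is what makes the submitted/success counters in the next lines meaningful.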
00:23:09.922 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25888 00:23:09.922 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25945, failed to submit 66 00:23:09.922 success 25888, unsuccessful 57, failed 0 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.922 rmmod nvme_tcp 00:23:09.922 rmmod nvme_fabrics 00:23:09.922 rmmod nvme_keyring 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 99963 ']' 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 99963 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 99963 ']' 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 99963 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.922 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99963 00:23:10.180 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.180 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.180 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99963' 00:23:10.180 killing process with pid 99963 00:23:10.180 
11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 99963 00:23:10.180 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 99963 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.439 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.439 11:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:23:10.698 00:23:10.698 real 0m4.701s 00:23:10.698 user 0m9.157s 00:23:10.698 sys 0m1.590s 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.698 ************************************ 00:23:10.698 END TEST nvmf_abort 00:23:10.698 ************************************ 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:10.698 ************************************ 00:23:10.698 START TEST nvmf_ns_hotplug_stress 00:23:10.698 ************************************ 00:23:10.698 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:10.698 * Looking for test storage... 00:23:10.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.698 11:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.698 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:23:10.699 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.699 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:10.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.699 --rc genhtml_branch_coverage=1 00:23:10.699 --rc genhtml_function_coverage=1 00:23:10.699 --rc genhtml_legend=1 00:23:10.699 --rc geninfo_all_blocks=1 00:23:10.699 --rc geninfo_unexecuted_blocks=1 00:23:10.699 00:23:10.699 ' 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.958 --rc genhtml_branch_coverage=1 00:23:10.958 --rc genhtml_function_coverage=1 00:23:10.958 --rc genhtml_legend=1 00:23:10.958 --rc geninfo_all_blocks=1 00:23:10.958 --rc geninfo_unexecuted_blocks=1 00:23:10.958 00:23:10.958 
' 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.958 --rc genhtml_branch_coverage=1 00:23:10.958 --rc genhtml_function_coverage=1 00:23:10.958 --rc genhtml_legend=1 00:23:10.958 --rc geninfo_all_blocks=1 00:23:10.958 --rc geninfo_unexecuted_blocks=1 00:23:10.958 00:23:10.958 ' 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.958 --rc genhtml_branch_coverage=1 00:23:10.958 --rc genhtml_function_coverage=1 00:23:10.958 --rc genhtml_legend=1 00:23:10.958 --rc geninfo_all_blocks=1 00:23:10.958 --rc geninfo_unexecuted_blocks=1 00:23:10.958 00:23:10.958 ' 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.958 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.959 11:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.959 11:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:10.959 Cannot find device "nvmf_init_br" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:23:10.959 Cannot find device "nvmf_init_br2" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:10.959 Cannot find device "nvmf_tgt_br" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.959 Cannot find device "nvmf_tgt_br2" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:10.959 Cannot find device "nvmf_init_br" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:10.959 Cannot find device "nvmf_init_br2" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:10.959 Cannot find device "nvmf_tgt_br" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:10.959 Cannot find device "nvmf_tgt_br2" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:10.959 Cannot find device "nvmf_br" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:10.959 Cannot find device "nvmf_init_if" 00:23:10.959 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:10.960 Cannot find device "nvmf_init_if2" 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.960 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:10.960 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:11.219 11:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:11.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:11.219 00:23:11.219 --- 10.0.0.3 ping statistics --- 00:23:11.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.219 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:11.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:11.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:23:11.219 00:23:11.219 --- 10.0.0.4 ping statistics --- 00:23:11.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.219 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:11.219 00:23:11.219 --- 10.0.0.1 ping statistics --- 00:23:11.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.219 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:11.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:23:11.219 00:23:11.219 --- 10.0.0.2 ping statistics --- 00:23:11.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.219 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:11.219 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=100278 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 100278 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 100278 ']' 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.220 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.220 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:11.220 [2024-12-16 11:43:16.605261] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:11.220 [2024-12-16 11:43:16.606564] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:23:11.220 [2024-12-16 11:43:16.606636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.478 [2024-12-16 11:43:16.763413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:11.478 [2024-12-16 11:43:16.825093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.478 [2024-12-16 11:43:16.825145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.479 [2024-12-16 11:43:16.825173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.479 [2024-12-16 11:43:16.825182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.479 [2024-12-16 11:43:16.825189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.479 [2024-12-16 11:43:16.826311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.479 [2024-12-16 11:43:16.827074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.479 [2024-12-16 11:43:16.827087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.737 [2024-12-16 11:43:16.928458] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:11.737 [2024-12-16 11:43:16.928470] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:11.737 [2024-12-16 11:43:16.928906] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:11.737 [2024-12-16 11:43:16.929583] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
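For this second test the harness tore down and rebuilt the veth topology (the "Cannot find device" / "Cannot open network namespace" messages above are the pre-cleanup running against a setup that no longer exists, each followed by true as expected), then restarted nvmf_tgt inside the namespace. Condensed from the ip/iptables commands traced above, the topology for one initiator/target pair looks like this; the if2 pair is created identically, and all four addresses answered the pings at 00:23:11:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(Plus ip link set ... up on each interface and on nvmf_br, as in the trace.) Because nvmf_tgt runs under ip netns exec nvmf_tgt_ns_spdk, its 10.0.0.3:4420 listener is reachable from the host side only across the nvmf_br bridge, which is the path the initiator-side tools use.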
00:23:11.737 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.737 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:23:11.737 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.737 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.737 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.737 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:23:11.737 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:11.996 [2024-12-16 11:43:17.308227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.996 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:12.255 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:12.823 [2024-12-16 11:43:17.932710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.823 11:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:13.081 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:13.341 Malloc0 00:23:13.341 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:13.600 Delay0 00:23:13.600 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:13.858 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:14.117 NULL1 00:23:14.117 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:14.375 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:14.375 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=100402 00:23:14.375 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:14.375 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:15.751 Read completed with error (sct=0, sc=11) 00:23:15.751 11:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:15.751 11:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:15.751 11:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:16.010 true 00:23:16.010 11:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:16.010 11:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:16.946 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:17.204 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:17.204 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:17.462 true 00:23:17.462 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:17.462 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.720 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:17.979 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1003 00:23:17.979 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:18.237 true 00:23:18.237 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:18.237 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:18.496 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:18.755 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:18.755 11:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:19.013 true 00:23:19.013 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:19.013 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.948 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:20.206 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:20.206 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:20.464 true 00:23:20.464 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:20.464 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:20.723 11:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:20.982 11:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:20.982 11:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:21.243 true 00:23:21.243 11:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:21.243 11:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:21.505 11:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:21.764 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:21.764 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:22.023 true 00:23:22.023 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:22.023 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:22.958 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:23.217 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:23.217 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:23:23.476 true 00:23:23.476 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:23.476 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:23.734 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:23.992 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:23:23.992 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:23:24.250 true 00:23:24.250 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:24.250 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:24.509 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:24.768 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:23:24.768 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:23:25.027 true 00:23:25.027 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:25.027 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:25.963 11:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:26.222 11:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:23:26.223 11:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:23:26.481 true 00:23:26.482 11:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:26.482 11:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:26.741 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:26.999 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:23:26.999 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:23:27.258 true 00:23:27.258 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:27.258 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:27.517 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:27.776 11:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:23:27.776 11:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:23:28.042 true 00:23:28.042 11:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:28.042 11:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.989 11:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:29.248 11:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:23:29.248 11:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:23:29.506 true 00:23:29.506 11:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:29.506 11:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:29.765 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:30.023 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:23:30.023 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:23:30.281 true 00:23:30.281 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:30.281 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:30.539 11:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:30.798 11:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:23:30.798 11:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:23:31.056 true 00:23:31.056 11:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:31.056 11:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:31.992 11:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:32.250 11:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:23:32.250 11:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:23:32.510 true 00:23:32.510 11:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:32.510 11:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:32.768 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:33.027 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:23:33.027 
11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:23:33.286 true 00:23:33.286 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:33.286 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:33.544 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:33.807 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:23:33.807 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:23:34.070 true 00:23:34.070 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:34.070 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.005 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:35.264 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:23:35.264 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:23:35.522 true 00:23:35.522 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:35.522 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.780 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:36.038 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:23:36.038 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:23:36.297 true 00:23:36.297 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:36.297 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:36.555 11:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:36.814 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:23:36.814 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:23:37.072 true 00:23:37.072 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:37.072 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:38.007 11:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:38.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:38.265 11:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:23:38.265 11:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:23:38.524 true 00:23:38.524 11:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:38.524 11:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:39.091 11:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:39.091 11:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:23:39.091 11:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:23:39.348 true 00:23:39.606 11:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:39.606 11:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:39.864 11:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:40.124 11:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:23:40.124 11:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:23:40.382 true 00:23:40.382 11:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:40.382 11:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:40.640 11:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:40.898 11:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:23:40.898 11:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:23:41.155 true 00:23:41.156 11:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:41.156 11:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:42.091 11:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:42.350 11:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:23:42.350 11:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:23:42.607 true 00:23:42.608 11:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:42.608 11:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:42.865 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:43.124 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:23:43.124 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:23:43.382 true 00:23:43.382 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402 00:23:43.382 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:43.640 11:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:43.899 11:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:23:43.899 11:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:23:44.157 true
00:23:44.157 11:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402
00:23:44.157 11:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:45.092 Initializing NVMe Controllers
00:23:45.092 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:23:45.092 Controller IO queue size 128, less than required.
00:23:45.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:45.092 Controller IO queue size 128, less than required.
00:23:45.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:45.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:45.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:45.093 Initialization complete. Launching workers.
00:23:45.093 ========================================================
00:23:45.093                                                                             Latency(us)
00:23:45.093 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:23:45.093 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     353.57       0.17  144773.54    3420.50 1020322.22
00:23:45.093 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7770.01       3.79   16473.61    3333.90  601010.72
00:23:45.093 ========================================================
00:23:45.093 Total                                                                   :    8123.58       3.97   22057.66    3333.90 1020322.22
00:23:45.093
00:23:45.093 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:23:45.351 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:23:45.351 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:23:45.610 true
00:23:45.610 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100402
00:23:45.610 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (100402) - No such process
00:23:45.610 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 100402
00:23:45.610 11:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:45.869 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:23:46.202 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:23:46.202 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:23:46.202 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:23:46.202 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:46.202 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:23:46.460 null0 00:23:46.460 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:46.460 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:46.460 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:23:46.718 null1 00:23:46.718 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:46.718 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:46.718 11:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:23:46.977 null2 00:23:46.977 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:46.977 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:46.977 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:23:47.235 null3 00:23:47.235 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:47.235 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:47.235 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:23:47.804 null4 00:23:47.804 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:47.804 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:47.804 11:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:23:47.804 null5 00:23:47.804 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:47.804 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:47.804 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:23:48.373 null6 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
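Phase one of ns_hotplug_stress.sh, traced above at @42 through @55, keeps hot-removing and re-adding namespace 1 and growing the NULL1 bdev by one unit per pass for as long as the backgrounded perf process (PERF_PID=100402, run with -t 30 -q 128 -w randread -o 512 -Q 1000 against 10.0.0.3:4420) stays alive; the loop ends once kill -0 reports "No such process", and the Total row of the summary table is the IOPS-weighted mean of the per-namespace averages: (353.57 * 144773.54 + 7770.01 * 16473.61) / 8123.58 ≈ 22057.66 us. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the script, assuming rpc.py (the SPDK JSON-RPC client, invoked by full path in the trace) is on PATH:

    # PERF_PID would be $! of the backgrounded perf process (100402 in this
    # run); the perf launch line itself is truncated at the top of this excerpt.
    null_size=1000
    while kill -0 "$PERF_PID"; do                                       # @44: perf still alive?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-unplug NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: plug it back in
        null_size=$((null_size + 1))                                    # @49: 1001, 1002, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                      # @50: resize under I/O
    done
    wait "$PERF_PID" || true                                            # @53: reap perf
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @54: cleanup
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # @55: cleanup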
00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:23:48.373 null7 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:23:48.373 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
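Phase two, starting at the @58 records above, first creates eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size going by bdev_null_create's usual name/size-MB/block-size argument order. A sketch of the creation loop traced at @58-@60, under the same rpc.py assumption:

    nthreads=8                                      # @58: eight concurrent workers
    pids=()                                         # @58: worker PIDs, collected below
    for ((i = 0; i < nthreads; i++)); do            # @59
        rpc.py bdev_null_create "null$i" 100 4096   # @60: null0 .. null7
    done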
00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:48.374 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
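The @62-@64 records interleaved above and below are the fork loop that pairs namespace IDs 1-8 with null0-null7 and backgrounds one add_remove worker per pair; the @66 record further down (wait 101442 101443 101446 101448 101449 101452 101453 101455) blocks until all eight finish. Sketched under the same assumptions:

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)                         # @64: remember the worker's PID
    done
    wait "${pids[@]}"                      # @66: join all eight workers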
00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
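Each worker runs the add_remove helper whose @14-@18 records dominate the rest of this excerpt: ten rounds of attaching its null bdev under a fixed namespace ID and detaching it again, so that eight namespaces churn concurrently on cnode1. Reconstructed from the xtrace, not the verbatim script:

    add_remove() {
        local nsid=$1 bdev=$2                                                           # @14
        for ((i = 0; i < 10; i++)); do                                                  # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }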
00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 101442 101443 101446 101448 101449 101452 101453 101455 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.633 11:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:48.891 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.150 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:49.409 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.668 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.669 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.669 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.926 11:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:49.926 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:50.184 11:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.184 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:50.442 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:50.701 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:50.701 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:50.701 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:50.701 11:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:50.701 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.960 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:51.219 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:51.477 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:51.477 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.477 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:51.478 11:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.478 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:51.736 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.736 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.736 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:51.736 11:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:51.737 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.737 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.737 11:43:57 
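Just above, two @16 increments appear back to back with no @17/@18 body between them, which suggests several of these loops running concurrently with their xtrace output interleaved. A sketch of that pattern; the job count and the hotplug_loop helper name are both hypothetical:

    hotplug_loop() {                  # hypothetical name for the loop sketched earlier
        local i
        for (( i = 0; i < 10; ++i )); do
            :                         # add_ns/remove_ns churn, as above
        done
    }
    for _ in 1 2 3 4; do              # assumed job count
        hotplug_loop &                # each background job emits its own @16-@18 trace
    done
    wait                              # all loops finish once each i reaches 10
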
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:51.737 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:51.737 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:51.737 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.995 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.996 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:51.996 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.996 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.996 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:52.254 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.255 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.513 11:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.513 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:52.772 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.772 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.772 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:52.773 11:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:52.773 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:52.773 11:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.031 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.032 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:53.032 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
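Each add/remove pair in the trace is an ordinary rpc.py call against the running target. One pair, taken verbatim from the log, with the nsid-to-bdev mapping made explicit:

    # Attach bdev null5 to subsystem cnode1 as namespace ID 6 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        -n 6 nqn.2016-06.io.spdk:cnode1 null5
    # ... then hot-remove the same namespace again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns \
        nqn.2016-06.io.spdk:cnode1 6
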
00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.294 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:53.558 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:53.817 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.817 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.817 11:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:53.817 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:54.076 11:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.076 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:54.077 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.077 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.077 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:54.077 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:54.335 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:54.335 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.336 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.594 11:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.594 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.594 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.853 rmmod nvme_tcp 00:23:54.853 rmmod nvme_fabrics 00:23:54.853 rmmod nvme_keyring 
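The rmmod lines above are the tail of nvmftestfini: once the trap is cleared, nvmfcleanup syncs and unloads the kernel initiator modules, retrying modprobe -r because nvme-tcp can still be referenced briefly after disconnect. A hedged reconstruction of the sequence traced at nvmf/common.sh@121-@129; the retry back-off is an assumption:

    sync                                       # @121
    if [[ $TEST_TRANSPORT == tcp ]]; then      # traced as '[' tcp == tcp ']' at @123
        set +e                                 # @124: tolerate "module in use" failures
        for i in {1..20}; do                   # @125
            modprobe -v -r nvme-tcp && break   # @126: produces the rmmod lines above
            sleep 1                            # assumed back-off between attempts
        done
        modprobe -v -r nvme-fabrics            # @127
        set -e                                 # @128
    fi
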
00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 100278 ']' 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 100278 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 100278 ']' 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 100278 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.853 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100278 00:23:55.112 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.112 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.112 killing process with pid 100278 00:23:55.112 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100278' 00:23:55.112 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 100278 00:23:55.112 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 100278 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:55.371 11:44:00 
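killprocess 100278 above tears down the nvmf target application; its comm shows up as reactor_1, the SPDK reactor thread. Its shape, reconstructed from the autotest_common.sh@954-@978 trace, is roughly the following; treat this as a sketch rather than the exact helper:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                             # @954: refuse an empty pid
        kill -0 "$pid" || return 0                            # @958: nothing left to kill
        if [[ $(uname) == Linux ]]; then                      # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 here
            [[ $process_name != sudo ]] || return 1           # @964: never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"                  # @972
        kill "$pid"                                           # @973
        wait "$pid"                                           # @978: reap the child
    }
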
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.371 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:23:55.630 00:23:55.630 real 0m44.898s 00:23:55.630 user 3m22.548s 00:23:55.630 sys 0m17.298s 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:55.630 ************************************ 00:23:55.630 END TEST nvmf_ns_hotplug_stress 00:23:55.630 ************************************ 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 
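The real/user/sys figures and END TEST banner above, and the START TEST banner below, come from the run_test wrapper: it brackets each test script in banners and times it, which is why the timing prints before the closing banner. A minimal sketch consistent with the autotest_common.sh trace; the banner width is approximated:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # here: delete_subsystem.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
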
00:23:55.630 ************************************ 00:23:55.630 START TEST nvmf_delete_subsystem 00:23:55.630 ************************************ 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:23:55.630 * Looking for test storage... 00:23:55.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.630 11:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:23:55.630 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.890 --rc genhtml_branch_coverage=1 00:23:55.890 --rc genhtml_function_coverage=1 00:23:55.890 --rc genhtml_legend=1 00:23:55.890 --rc geninfo_all_blocks=1 00:23:55.890 --rc geninfo_unexecuted_blocks=1 00:23:55.890 00:23:55.890 ' 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:55.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.890 --rc genhtml_branch_coverage=1 00:23:55.890 --rc genhtml_function_coverage=1 00:23:55.890 --rc genhtml_legend=1 00:23:55.890 --rc geninfo_all_blocks=1 00:23:55.890 --rc geninfo_unexecuted_blocks=1 00:23:55.890 00:23:55.890 ' 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.890 --rc genhtml_branch_coverage=1 00:23:55.890 --rc genhtml_function_coverage=1 00:23:55.890 --rc genhtml_legend=1 00:23:55.890 --rc geninfo_all_blocks=1 00:23:55.890 --rc geninfo_unexecuted_blocks=1 00:23:55.890 00:23:55.890 ' 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.890 --rc genhtml_branch_coverage=1 00:23:55.890 --rc genhtml_function_coverage=1 00:23:55.890 --rc 
genhtml_legend=1 00:23:55.890 --rc geninfo_all_blocks=1 00:23:55.890 --rc geninfo_unexecuted_blocks=1 00:23:55.890 00:23:55.890 ' 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:23:55.890 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.891 11:44:01 
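The very long PATH values above come from /etc/opt/spdk-pkgdep/paths/export.sh, which every test sources and which unconditionally prepends the toolchain directories, so nested test runs accumulate duplicate entries (harmless, since lookup stops at the first match). Its effective content, per the @2-@6 trace:

    PATH=/opt/golangci/1.54.2/bin:$PATH   # paths/export.sh@2
    PATH=/opt/go/1.21.1/bin:$PATH         # @3
    PATH=/opt/protoc/21.7/bin:$PATH       # @4
    export PATH                           # @5
    echo "$PATH"                          # @6
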
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.891 11:44:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:55.891 Cannot find device "nvmf_init_br" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:55.891 Cannot find device "nvmf_init_br2" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:55.891 Cannot find device "nvmf_tgt_br" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.891 Cannot find device "nvmf_tgt_br2" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:55.891 Cannot find device "nvmf_init_br" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:55.891 Cannot find device "nvmf_init_br2" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:55.891 Cannot find device "nvmf_tgt_br" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:55.891 Cannot find device "nvmf_tgt_br2" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:55.891 Cannot find device "nvmf_br" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:55.891 Cannot find device "nvmf_init_if" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:55.891 Cannot find device "nvmf_init_if2" 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:23:55.891 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.892 11:44:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:55.892 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:56.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:56.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:56.151 00:23:56.151 --- 10.0.0.3 ping statistics --- 00:23:56.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.151 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:56.151 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:56.151 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:23:56.151 00:23:56.151 --- 10.0.0.4 ping statistics --- 00:23:56.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.151 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:56.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:56.151 00:23:56.151 --- 10.0.0.1 ping statistics --- 00:23:56.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.151 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:56.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:56.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:23:56.151 00:23:56.151 --- 10.0.0.2 ping statistics --- 00:23:56.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.151 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:23:56.151 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=102845 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 102845 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 102845 ']' 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
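The "Cannot find device" lines earlier are harmless: nvmf_veth_init first tears down whatever a previous run left behind, and each failing ip command is followed by true so the script keeps going. The setup that then succeeds (common.sh @177-@225) reduces to the sketch below. It is condensed, not the harness verbatim: only the first initiator/target pair is shown, and the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) is wired the same way.

ip netns add nvmf_tgt_ns_spdk
# each endpoint is a veth pair whose *_br peer stays on the host for bridging
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# the harness's ipts wrapper also tags each rule with an SPDK_NVMF comment,
# presumably so cleanup can find it; the plain rule is enough for a sketch
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # and back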
00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.152 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.152 [2024-12-16 11:44:01.536461] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:56.152 [2024-12-16 11:44:01.537767] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:23:56.152 [2024-12-16 11:44:01.537847] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.410 [2024-12-16 11:44:01.690232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.410 [2024-12-16 11:44:01.750330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.410 [2024-12-16 11:44:01.750598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.410 [2024-12-16 11:44:01.750773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.410 [2024-12-16 11:44:01.751095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.410 [2024-12-16 11:44:01.751297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.410 [2024-12-16 11:44:01.752774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.410 [2024-12-16 11:44:01.752787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.668 [2024-12-16 11:44:01.856175] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:56.668 [2024-12-16 11:44:01.856505] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:56.668 [2024-12-16 11:44:01.856794] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
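With the fabric up, nvmfappstart launches the target inside the namespace and waitforlisten polls the RPC socket until it answers. A minimal sketch of that, assuming the SPDK repo root as working directory; the 0.1 s poll interval is illustrative, the 100-attempt cap mirrors max_retries above:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # any cheap RPC works as a liveness probe; /var/tmp/spdk.sock is a UNIX
    # socket on the filesystem, so it is reachable from the host even though
    # the target sits in its own *network* namespace
    ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done

The startup notices confirm what -m 0x3 plus --interrupt-mode asked for: reactors on cores 0 and 1, with app_thread and both nvmf_tgt poll groups set to interrupt mode. As the trace notices say, 'spdk_trace -s nvmf -i 0' can snapshot tracepoints at runtime, or /dev/shm/nvmf_trace.0 can be copied for offline analysis.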
00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.668 [2024-12-16 11:44:01.938196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.668 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.669 [2024-12-16 11:44:01.958534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.669 NULL1 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.669 Delay0 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=102877 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:23:56.669 11:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:56.927 [2024-12-16 11:44:02.159513] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
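rpc_cmd above is the harness wrapper around scripts/rpc.py, and the fixture it builds is deliberate: a 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev that adds 1000000 us (one second) to every read and write path, so the 128-deep perf workload is guaranteed to have I/O in flight when the subsystem is deleted. Driven directly, the sequence is this sketch (repo-root paths assumed):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete mid-I/O

The error storm that follows is the point of the test, not a failure: sct=0, sc=8 is the NVMe generic status "command aborted due to SQ deletion", and the "starting I/O failed: -6" lines (errno 6, ENXIO) are submissions refused once the qpair is torn down. The minimums of a few hundred microseconds in the first latency table are commands completing as aborts well before the delay bdev's one-second latency could elapse.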
00:23:58.830 11:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.830 11:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.830 11:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 [2024-12-16 11:44:04.193593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98da50 is same with the state(6) to be set 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 
00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 [2024-12-16 11:44:04.194099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990ea0 is same with the state(6) to be set 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.830 Write completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 Read completed with error (sct=0, sc=8) 00:23:58.830 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error 
(sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O 
failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Read completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 Write completed with error (sct=0, sc=8) 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:58.831 starting I/O failed: -6 00:23:59.767 [2024-12-16 11:44:05.174712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x982aa0 is same with the state(6) to be set 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, 
sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 [2024-12-16 11:44:05.192501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe5c00d840 is same with the state(6) to be set 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 [2024-12-16 11:44:05.196041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e7e0 is same with the state(6) to be set 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 [2024-12-16 11:44:05.196343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98dc30 is same with the state(6) to be set 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 
00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Read completed with error (sct=0, sc=8) 00:24:00.026 Write completed with error (sct=0, sc=8) 00:24:00.026 [2024-12-16 11:44:05.196887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe5c00d060 is same with the state(6) to be set 00:24:00.026 Initializing NVMe Controllers 00:24:00.026 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.026 Controller IO queue size 128, less than required. 00:24:00.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.026 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:00.026 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:00.026 Initialization complete. Launching workers. 
00:24:00.026 ======================================================== 00:24:00.026 Latency(us) 00:24:00.026 Device Information : IOPS MiB/s Average min max 00:24:00.027 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 156.78 0.08 927646.03 527.49 1013669.86 00:24:00.027 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.58 0.09 918524.13 425.05 1016053.75 00:24:00.027 ======================================================== 00:24:00.027 Total : 338.36 0.17 922750.70 425.05 1016053.75 00:24:00.027 00:24:00.027 [2024-12-16 11:44:05.198069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x982aa0 (9): Bad file descriptor 00:24:00.027 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:00.027 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.027 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:24:00.027 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102877 00:24:00.027 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102877 00:24:00.594 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (102877) - No such process 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 102877 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 102877 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 102877 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:00.594 [2024-12-16 11:44:05.726502] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=102925 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:00.594 11:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:00.594 [2024-12-16 11:44:05.915869] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
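Phase two re-creates the subsystem and reuses the Delay0 namespace, but this time the 3-second perf run (-t 3) is left alone to finish; the NOT wait 102877 check above was the inverse assertion for phase one, proving the first perf run was already gone. The poll loop that follows (@56-@60) reduces to this sketch; the break is a simplification, since the real loop fails the test on timeout:

perf_pid=$!          # the second spdk_nvme_perf, started above
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do   # "No such process" ends the loop
    (( delay++ > 20 )) && break              # ~10 s cap at 0.5 s per turn
    sleep 0.5
done
wait "$perf_pid"     # reap it, as line 67 does below

Because nothing is deleted this time, every I/O waits out the full delay: the second latency table below shows minimums just above 1.0 s (1000164 us), exactly the delay bdev's configured latency, where the aborted run's minimums were a few hundred microseconds.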
00:24:00.853 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:00.853 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:00.853 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:01.445 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:01.445 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:01.445 11:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:02.013 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:02.013 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:02.013 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:02.581 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:02.581 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:02.581 11:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:03.148 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:03.148 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:03.148 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:03.407 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:03.407 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:03.407 11:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:03.666 Initializing NVMe Controllers 00:24:03.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.666 Controller IO queue size 128, less than required. 00:24:03.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:03.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:03.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:03.666 Initialization complete. Launching workers. 
00:24:03.666 ======================================================== 00:24:03.666 Latency(us) 00:24:03.666 Device Information : IOPS MiB/s Average min max 00:24:03.666 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003378.63 1000164.73 1044267.70 00:24:03.666 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005980.21 1000232.84 1042609.01 00:24:03.666 ======================================================== 00:24:03.666 Total : 256.00 0.12 1004679.42 1000164.73 1044267.70 00:24:03.666 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102925 00:24:03.924 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (102925) - No such process 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 102925 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.924 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.924 rmmod nvme_tcp 00:24:04.182 rmmod nvme_fabrics 00:24:04.182 rmmod nvme_keyring 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 102845 ']' 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 102845 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 102845 ']' 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 102845 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 102845 00:24:04.182 killing process with pid 102845 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102845' 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 102845 00:24:04.182 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 102845 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.441 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:24:04.699 00:24:04.699 real 0m8.991s 00:24:04.699 user 0m24.348s 00:24:04.699 sys 0m2.249s 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:04.699 ************************************ 00:24:04.699 END TEST nvmf_delete_subsystem 00:24:04.699 ************************************ 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.699 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:04.699 ************************************ 00:24:04.699 START TEST nvmf_host_management 00:24:04.699 ************************************ 00:24:04.700 11:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:04.700 * Looking for test storage... 
00:24:04.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.700 --rc genhtml_branch_coverage=1 00:24:04.700 --rc genhtml_function_coverage=1 00:24:04.700 --rc genhtml_legend=1 00:24:04.700 --rc geninfo_all_blocks=1 00:24:04.700 --rc geninfo_unexecuted_blocks=1 00:24:04.700 00:24:04.700 ' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.700 --rc genhtml_branch_coverage=1 00:24:04.700 --rc genhtml_function_coverage=1 00:24:04.700 --rc genhtml_legend=1 00:24:04.700 --rc geninfo_all_blocks=1 00:24:04.700 --rc geninfo_unexecuted_blocks=1 00:24:04.700 00:24:04.700 ' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.700 --rc genhtml_branch_coverage=1 00:24:04.700 --rc genhtml_function_coverage=1 00:24:04.700 --rc genhtml_legend=1 00:24:04.700 --rc geninfo_all_blocks=1 00:24:04.700 --rc geninfo_unexecuted_blocks=1 00:24:04.700 00:24:04.700 ' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.700 --rc genhtml_branch_coverage=1 00:24:04.700 --rc genhtml_function_coverage=1 00:24:04.700 --rc genhtml_legend=1 
00:24:04.700 --rc geninfo_all_blocks=1 00:24:04.700 --rc geninfo_unexecuted_blocks=1 00:24:04.700 00:24:04.700 ' 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.700 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.958 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain prefixes repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=[the same duplicated set, now led by /opt/go/1.21.1/bin] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=[the same duplicated set, now led by /opt/protoc/21.7/bin] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [the same PATH value set at line 4] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.959 11:44:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:04.959 11:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:04.959 Cannot find device "nvmf_init_br" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:04.959 Cannot find device "nvmf_init_br2" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:04.959 Cannot find device "nvmf_tgt_br" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.959 Cannot find device "nvmf_tgt_br2" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:04.959 Cannot find device "nvmf_init_br" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:24:04.959 Cannot find device "nvmf_init_br2" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:04.959 Cannot find device "nvmf_tgt_br" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:04.959 Cannot find device "nvmf_tgt_br2" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:04.959 Cannot find device "nvmf_br" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:04.959 Cannot find device "nvmf_init_if" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:04.959 Cannot find device "nvmf_init_if2" 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:04.959 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:04.960 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:04.960 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:04.960 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:24:04.960 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:04.960 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.218 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.219 11:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:24:05.219 00:24:05.219 --- 10.0.0.3 ping statistics --- 00:24:05.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.219 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:24:05.219 00:24:05.219 --- 10.0.0.4 ping statistics --- 00:24:05.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.219 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:05.219 00:24:05.219 --- 10.0.0.1 ping statistics --- 00:24:05.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.219 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:05.219 00:24:05.219 --- 10.0.0.2 ping statistics --- 00:24:05.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.219 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=103207 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 103207 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 103207 ']' 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
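The nvmf_veth_init block above builds the whole test network from scratch: a network namespace for the target, veth pairs attached to a bridge, per-interface iptables ACCEPT rules for port 4420, and smoke-test pings. A condensed sketch of that topology, keeping only the first initiator/target pair (names and addresses are the ones in the trace; the second if2/br2 pair is set up the same way):

    # Namespaced target (10.0.0.3) bridged to a host-side initiator (10.0.0.1).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host -> namespaced target, as the trace verifies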
00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.219 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.219 [2024-12-16 11:44:10.604577] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:05.219 [2024-12-16 11:44:10.606114] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:24:05.219 [2024-12-16 11:44:10.606188] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.477 [2024-12-16 11:44:10.761583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.477 [2024-12-16 11:44:10.826022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.477 [2024-12-16 11:44:10.826337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.477 [2024-12-16 11:44:10.826362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.477 [2024-12-16 11:44:10.826376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.477 [2024-12-16 11:44:10.826386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.477 [2024-12-16 11:44:10.827766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.477 [2024-12-16 11:44:10.827950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.477 [2024-12-16 11:44:10.828084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:24:05.477 [2024-12-16 11:44:10.828086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.737 [2024-12-16 11:44:10.932079] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:05.737 [2024-12-16 11:44:10.932724] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:05.737 [2024-12-16 11:44:10.932944] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:05.737 [2024-12-16 11:44:10.933051] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:05.737 [2024-12-16 11:44:10.933799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
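The target is started with -m 0x1E, and the reactor notices above confirm it coming up on cores 1-4, leaving core 0 free for the bdevperf initiator launched later with a 0x1 mask. Decoding the core mask, for reference:

    # 0x1E = 0b11110: bits 1-4 set, bit 0 clear -> reactors on cores 1,2,3,4.
    mask=0x1E
    for bit in {0..7}; do
        (( (mask >> bit) & 1 )) && echo "reactor on core $bit"
    done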
00:24:05.737 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.737 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:05.737 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.737 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.737 11:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.737 [2024-12-16 11:44:11.021536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.737 Malloc0 00:24:05.737 [2024-12-16 11:44:11.109758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.737 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
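Before bdevperf starts, host_management.sh builds an rpcs.txt batch (the cat at line 23 above) and feeds it to rpc_cmd in one pass. The trace only confirms the tcp transport (created separately at line 18 with -o -u 8192), the Malloc0 bdev, and the 10.0.0.3:4420 listener, so the following reconstruction of the batch is a hypothetical sketch: the subsystem name cnode0 comes from the bdevperf JSON rendered below, the Malloc sizes from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE earlier, and the exact flags are assumptions.

    bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks (assumed order of args)
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420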
00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=103264 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 103264 /var/tmp/bdevperf.sock 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 103264 ']' 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:05.995 { 00:24:05.995 "params": { 00:24:05.995 "name": "Nvme$subsystem", 00:24:05.995 "trtype": "$TEST_TRANSPORT", 00:24:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.995 "adrfam": "ipv4", 00:24:05.995 "trsvcid": "$NVMF_PORT", 00:24:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.995 "hdgst": ${hdgst:-false}, 00:24:05.995 "ddgst": ${ddgst:-false} 00:24:05.995 }, 00:24:05.995 "method": "bdev_nvme_attach_controller" 00:24:05.995 } 00:24:05.995 EOF 00:24:05.995 )") 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:05.995 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:05.995 "params": { 00:24:05.995 "name": "Nvme0", 00:24:05.995 "trtype": "tcp", 00:24:05.995 "traddr": "10.0.0.3", 00:24:05.995 "adrfam": "ipv4", 00:24:05.995 "trsvcid": "4420", 00:24:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:05.995 "hdgst": false, 00:24:05.995 "ddgst": false 00:24:05.995 }, 00:24:05.995 "method": "bdev_nvme_attach_controller" 00:24:05.995 }' 00:24:05.995 [2024-12-16 11:44:11.215466] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:24:05.995 [2024-12-16 11:44:11.215563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103264 ] 00:24:05.995 [2024-12-16 11:44:11.366331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.254 [2024-12-16 11:44:11.433133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.254 Running I/O for 10 seconds... 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.512 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:24:06.513 11:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=555 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 555 -ge 100 ']' 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.773 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 [2024-12-16 11:44:12.073576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80530 is same with the state(6) to be set 00:24:06.773 [2024-12-16 11:44:12.073666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80530 is same with the state(6) to be set 00:24:06.773 [2024-12-16 11:44:12.073694] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80530 is same with the state(6) to be set [identical tcp.c:1790 record repeated 33 more times at timestamps 11:44:12.073703 through 11:44:12.073975, elided] 00:24:06.774 [2024-12-16 11:44:12.077686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.774 [2024-12-16 11:44:12.077822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.077977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.077986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 
[2024-12-16 11:44:12.078052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.774 [2024-12-16 11:44:12.078162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:06.774 [2024-12-16 11:44:12.078418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 
11:44:12.078447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.774 [2024-12-16 11:44:12.078508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.774 [2024-12-16 11:44:12.078519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.775 [2024-12-16 11:44:12.078619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:06.775 [2024-12-16 11:44:12.078805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.078985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.078994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.079025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.079045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.775 [2024-12-16 11:44:12.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:06.775 [2024-12-16 11:44:12.079256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.775 [2024-12-16 11:44:12.079274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.775 [2024-12-16 11:44:12.079295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.775 [2024-12-16 11:44:12.079314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.775 [2024-12-16 11:44:12.079333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.775 [2024-12-16 11:44:12.079342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f130 is same with the state(6) to be set 00:24:06.775 [2024-12-16 11:44:12.080445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:06.775 task offset: 81920 on job bdev=Nvme0n1 fails 00:24:06.775 00:24:06.775 Latency(us) 00:24:06.775 [2024-12-16T11:44:12.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.775 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.775 Job: Nvme0n1 ended in about 0.46 seconds with error 00:24:06.775 Verification LBA range: start 0x0 length 0x400 00:24:06.775 Nvme0n1 : 0.46 1397.27 87.33 139.73 0.00 40000.56 2323.55 44564.48 00:24:06.775 [2024-12-16T11:44:12.189Z] =================================================================================================================== 00:24:06.775 [2024-12-16T11:44:12.189Z] Total : 1397.27 87.33 139.73 0.00 40000.56 2323.55 44564.48 00:24:06.775 [2024-12-16 11:44:12.082377] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:06.775 [2024-12-16 11:44:12.082403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140f130 (9): Bad file descriptor 00:24:06.775 [2024-12-16 11:44:12.085752] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
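The wait loop traced above (target/host_management.sh, lines 45-64 in the xtrace) reduces to the sketch below. Names are taken from the xtrace output; the argument checks and error messages are condensed, so treat this as an illustration rather than the exact helper:

waitforio() {
    # Poll bdevperf over its RPC socket until the bdev shows at least
    # 100 completed reads, giving up after 10 polls of 0.25 s each.
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -n "$sock" ] || return 1
    [ -n "$bdev" ] || return 1
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns per-bdev statistics as JSON; jq pulls
        # out the cumulative read count.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return "$ret"
}

In the run above the loop needed two iterations: the first sample read 67 ops, below the -ge 100 threshold, and after the 0.25 s sleep the second sample read 555 ops, so the test went on to remove the host and observe the aborted writes dumped above.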
00:24:06.775 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.775 11:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 103264 00:24:07.712 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (103264) - No such process 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.712 { 00:24:07.712 "params": { 00:24:07.712 "name": "Nvme$subsystem", 00:24:07.712 "trtype": "$TEST_TRANSPORT", 00:24:07.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.712 "adrfam": "ipv4", 00:24:07.712 "trsvcid": "$NVMF_PORT", 00:24:07.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.712 "hdgst": ${hdgst:-false}, 00:24:07.712 "ddgst": ${ddgst:-false} 00:24:07.712 }, 00:24:07.712 "method": "bdev_nvme_attach_controller" 00:24:07.712 } 00:24:07.712 EOF 00:24:07.712 )") 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:07.712 11:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:07.712 "params": { 00:24:07.712 "name": "Nvme0", 00:24:07.712 "trtype": "tcp", 00:24:07.712 "traddr": "10.0.0.3", 00:24:07.712 "adrfam": "ipv4", 00:24:07.712 "trsvcid": "4420", 00:24:07.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:07.712 "hdgst": false, 00:24:07.712 "ddgst": false 00:24:07.712 }, 00:24:07.712 "method": "bdev_nvme_attach_controller" 00:24:07.712 }' 00:24:07.971 [2024-12-16 11:44:13.154902] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
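The JSON handed to bdevperf via --json /dev/fd/62 comes from gen_nvmf_target_json (nvmf/common.sh, lines 560-586 in the xtrace). Below is a condensed sketch reconstructed from that trace; the enclosing JSON document and the jq pass seen at common.sh@584 are elided, so it shows only how the per-subsystem stanzas are built and joined:

gen_nvmf_target_json() {
    local subsystem config=()
    # Default to subsystem 1 when called with no arguments; the test
    # above passes 0, yielding Nvme0 / cnode0 / host0.
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller stanza per requested subsystem;
        # hdgst/ddgst fall back to false when the caller leaves them unset.
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    # Join the stanzas with commas, matching the IFS=, / printf pair at
    # common.sh@585-586 in the trace.
    local IFS=,
    printf '%s\n' "${config[*]}"
}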
00:24:07.971 [2024-12-16 11:44:13.155266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103306 ] 00:24:07.971 [2024-12-16 11:44:13.306447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.971 [2024-12-16 11:44:13.366334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.229 Running I/O for 1 seconds... 00:24:09.422 1510.00 IOPS, 94.38 MiB/s 00:24:09.422 Latency(us) 00:24:09.422 [2024-12-16T11:44:14.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.422 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.422 Verification LBA range: start 0x0 length 0x400 00:24:09.422 Nvme0n1 : 1.04 1533.28 95.83 0.00 0.00 40950.92 5213.09 36700.16 00:24:09.422 [2024-12-16T11:44:14.836Z] =================================================================================================================== 00:24:09.422 [2024-12-16T11:44:14.836Z] Total : 1533.28 95.83 0.00 0.00 40950.92 5213.09 36700.16 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.422 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.682 rmmod nvme_tcp 00:24:09.682 rmmod nvme_fabrics 00:24:09.682 rmmod nvme_keyring 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 103207 ']' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 103207 00:24:09.682 11:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 103207 ']' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 103207 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103207 00:24:09.682 killing process with pid 103207 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103207' 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 103207 00:24:09.682 11:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 103207 00:24:09.945 [2024-12-16 11:44:15.180541] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:09.945 11:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:09.945 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:10.206 00:24:10.206 real 0m5.549s 00:24:10.206 user 0m17.876s 00:24:10.206 sys 0m2.190s 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.206 ************************************ 00:24:10.206 END TEST nvmf_host_management 00:24:10.206 ************************************ 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:10.206 ************************************ 00:24:10.206 START TEST nvmf_lvol 00:24:10.206 ************************************ 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:10.206 * Looking for test storage... 
00:24:10.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.206 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.466 --rc genhtml_branch_coverage=1 00:24:10.466 --rc genhtml_function_coverage=1 00:24:10.466 --rc genhtml_legend=1 00:24:10.466 --rc geninfo_all_blocks=1 00:24:10.466 --rc geninfo_unexecuted_blocks=1 00:24:10.466 00:24:10.466 ' 00:24:10.466 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.467 --rc genhtml_branch_coverage=1 00:24:10.467 --rc genhtml_function_coverage=1 00:24:10.467 --rc genhtml_legend=1 00:24:10.467 --rc geninfo_all_blocks=1 00:24:10.467 --rc geninfo_unexecuted_blocks=1 00:24:10.467 00:24:10.467 ' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.467 --rc genhtml_branch_coverage=1 00:24:10.467 --rc genhtml_function_coverage=1 00:24:10.467 --rc genhtml_legend=1 00:24:10.467 --rc geninfo_all_blocks=1 00:24:10.467 --rc geninfo_unexecuted_blocks=1 00:24:10.467 00:24:10.467 ' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.467 --rc genhtml_branch_coverage=1 00:24:10.467 --rc genhtml_function_coverage=1 00:24:10.467 --rc genhtml_legend=1 00:24:10.467 --rc geninfo_all_blocks=1 00:24:10.467 --rc geninfo_unexecuted_blocks=1 00:24:10.467 00:24:10.467 ' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.467 11:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.467 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:10.467 Cannot find device "nvmf_init_br" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:10.468 Cannot find device "nvmf_init_br2" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:10.468 Cannot find device "nvmf_tgt_br" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.468 Cannot find device "nvmf_tgt_br2" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:10.468 Cannot find device "nvmf_init_br" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:10.468 Cannot find device "nvmf_init_br2" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:10.468 Cannot find 
device "nvmf_tgt_br" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:10.468 Cannot find device "nvmf_tgt_br2" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:10.468 Cannot find device "nvmf_br" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:10.468 Cannot find device "nvmf_init_if" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:10.468 Cannot find device "nvmf_init_if2" 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.468 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:10.728 11:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:10.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:10.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:24:10.728 00:24:10.728 --- 10.0.0.3 ping statistics --- 00:24:10.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.728 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:10.728 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:10.728 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:24:10.728 00:24:10.728 --- 10.0.0.4 ping statistics --- 00:24:10.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.728 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:10.728 00:24:10.728 --- 10.0.0.1 ping statistics --- 00:24:10.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.728 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:10.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:24:10.728 00:24:10.728 --- 10.0.0.2 ping statistics --- 00:24:10.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.728 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=103572 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 103572 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 103572 ']' 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.728 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:10.987 [2024-12-16 11:44:16.191577] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:10.987 [2024-12-16 11:44:16.193092] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:24:10.987 [2024-12-16 11:44:16.193167] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.987 [2024-12-16 11:44:16.344977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.246 [2024-12-16 11:44:16.407811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.246 [2024-12-16 11:44:16.408116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.246 [2024-12-16 11:44:16.408145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.246 [2024-12-16 11:44:16.408157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.246 [2024-12-16 11:44:16.408165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.246 [2024-12-16 11:44:16.409397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.246 [2024-12-16 11:44:16.409554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.246 [2024-12-16 11:44:16.409565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.246 [2024-12-16 11:44:16.512362] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:11.246 [2024-12-16 11:44:16.512657] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:11.246 [2024-12-16 11:44:16.513281] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:11.246 [2024-12-16 11:44:16.513333] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
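At this point the trace shows the NVMe-oF target being launched inside the nvmf_tgt_ns_spdk namespace in interrupt mode, after which the lvol test provisions its stack over JSON-RPC. The following is a condensed, standalone sketch of that sequence, assembled from the commands visible in the trace itself; the socket-polling loop is an assumption standing in for the suite's waitforlisten helper, and capturing RPC output into shell variables mirrors what the xtrace shows rather than quoting the script verbatim.

```bash
#!/usr/bin/env bash
# Sketch: bring up nvmf_tgt in the test namespace, then build
# Malloc -> RAID0 -> lvstore -> lvol -> NVMe-oF subsystem, as the trace does.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!

# Assumption: poll for the RPC socket instead of the suite's waitforlisten.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

$rpc nvmf_create_transport -t tcp -o -u 8192

base_bdevs="$($rpc bdev_malloc_create 64 512) "   # -> "Malloc0 "
base_bdevs+=$($rpc bdev_malloc_create 64 512)     # -> "Malloc0 Malloc1"
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base_bdevs"

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # size 20 (LVOL_BDEV_INIT_SIZE), prints the lvol UUID

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
```

The lvol UUID captured here (0a56fba6-… in this run) is what the later bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone, and bdev_lvol_inflate calls operate on, while spdk_nvme_perf drives random writes at the 10.0.0.3:4420 listener.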
00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.246 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:11.505 [2024-12-16 11:44:16.814605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.505 11:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:12.073 11:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:24:12.073 11:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:12.073 11:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:24:12.073 11:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:12.641 11:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:24:12.641 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f8c38b97-40d8-4475-a758-3deb1e6fd253 00:24:12.641 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f8c38b97-40d8-4475-a758-3deb1e6fd253 lvol 20 00:24:13.208 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a56fba6-889f-46c2-b4eb-210585f0306c 00:24:13.208 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:13.208 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a56fba6-889f-46c2-b4eb-210585f0306c 00:24:13.776 11:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:13.776 [2024-12-16 11:44:19.154741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:13.776 11:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:14.374 11:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=103706 00:24:14.374 11:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:24:14.374 11:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:24:15.309 11:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0a56fba6-889f-46c2-b4eb-210585f0306c MY_SNAPSHOT 00:24:15.568 11:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0866b206-964e-40f0-9d2f-4dc0f958d3b9 00:24:15.568 11:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0a56fba6-889f-46c2-b4eb-210585f0306c 30 00:24:15.826 11:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0866b206-964e-40f0-9d2f-4dc0f958d3b9 MY_CLONE 00:24:16.085 11:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6e28c93d-d37e-4ed8-9ca5-c0aa9219b10d 00:24:16.085 11:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 6e28c93d-d37e-4ed8-9ca5-c0aa9219b10d 00:24:16.652 11:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 103706 00:24:24.764 Initializing NVMe Controllers 00:24:24.764 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:24:24.765 Controller IO queue size 128, less than required. 00:24:24.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:24:24.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:24:24.765 Initialization complete. Launching workers. 
00:24:24.765 ======================================================== 00:24:24.765 Latency(us) 00:24:24.765 Device Information : IOPS MiB/s Average min max 00:24:24.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10228.30 39.95 12526.55 6595.64 63533.86 00:24:24.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10430.90 40.75 12282.63 4448.71 60735.26 00:24:24.765 ======================================================== 00:24:24.765 Total : 20659.20 80.70 12403.39 4448.71 63533.86 00:24:24.765 00:24:24.765 11:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:24.765 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a56fba6-889f-46c2-b4eb-210585f0306c 00:24:25.023 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8c38b97-40d8-4475-a758-3deb1e6fd253 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.281 rmmod nvme_tcp 00:24:25.281 rmmod nvme_fabrics 00:24:25.281 rmmod nvme_keyring 00:24:25.281 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 103572 ']' 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 103572 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 103572 ']' 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 103572 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103572 00:24:25.540 killing 
process with pid 103572 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103572' 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 103572 00:24:25.540 11:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 103572 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.799 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:26.058 
11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:24:26.058 00:24:26.058 real 0m15.727s 00:24:26.058 user 0m56.247s 00:24:26.058 sys 0m5.713s 00:24:26.058 ************************************ 00:24:26.058 END TEST nvmf_lvol 00:24:26.058 ************************************ 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:26.058 ************************************ 00:24:26.058 START TEST nvmf_lvs_grow 00:24:26.058 ************************************ 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:26.058 * Looking for test storage... 
00:24:26.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.058 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.317 --rc genhtml_branch_coverage=1 00:24:26.317 --rc genhtml_function_coverage=1 00:24:26.317 --rc genhtml_legend=1 00:24:26.317 --rc geninfo_all_blocks=1 00:24:26.317 --rc geninfo_unexecuted_blocks=1 00:24:26.317 00:24:26.317 ' 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.317 --rc genhtml_branch_coverage=1 00:24:26.317 --rc genhtml_function_coverage=1 00:24:26.317 --rc genhtml_legend=1 00:24:26.317 --rc geninfo_all_blocks=1 00:24:26.317 --rc geninfo_unexecuted_blocks=1 00:24:26.317 00:24:26.317 ' 00:24:26.317 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.317 --rc genhtml_branch_coverage=1 00:24:26.317 --rc genhtml_function_coverage=1 00:24:26.317 --rc genhtml_legend=1 00:24:26.317 --rc geninfo_all_blocks=1 00:24:26.318 --rc geninfo_unexecuted_blocks=1 00:24:26.318 00:24:26.318 ' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.318 --rc genhtml_branch_coverage=1 00:24:26.318 --rc genhtml_function_coverage=1 00:24:26.318 --rc genhtml_legend=1 00:24:26.318 --rc geninfo_all_blocks=1 00:24:26.318 --rc geninfo_unexecuted_blocks=1 00:24:26.318 00:24:26.318 ' 00:24:26.318 11:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
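The lvs_grow test re-sources nvmf/common.sh, so the trace that follows repeats the same nvmf_veth_init bring-up already seen for the lvol test. The "Cannot find device" and "Cannot open network namespace" messages are guarded teardown probes for leftovers from a previous run (each is followed by `# true` in the trace), after which the topology is rebuilt. Condensed into a standalone sketch, with names, addresses, and the iptables rule copied from the trace (the loop is a compaction for brevity, not the script's literal control flow):

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge topology nvmf_veth_init rebuilds for each test:
# two initiator veth pairs on the host, two target pairs inside a netns,
# all bridged together via nvmf_br.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if    # first initiator
ip addr add 10.0.0.2/24 dev nvmf_init_if2   # second initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# The suite's ipts wrapper tags each rule with an SPDK_NVMF comment so teardown
# (iptr) can strip them via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
```

Pinging 10.0.0.3 and 10.0.0.4 from the host, and 10.0.0.1 and 10.0.0.2 from inside the namespace, as the trace does next, confirms the bridge is passing traffic before the target is started.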
00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.318 11:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:26.318 Cannot find device "nvmf_init_br" 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:26.318 Cannot find device "nvmf_init_br2" 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:26.318 Cannot find device "nvmf_tgt_br" 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:26.318 Cannot find device "nvmf_tgt_br2" 00:24:26.318 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:26.319 Cannot find device "nvmf_init_br" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:26.319 Cannot find device "nvmf_init_br2" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:26.319 Cannot find device "nvmf_tgt_br" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:26.319 Cannot find device "nvmf_tgt_br2" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:26.319 Cannot find device "nvmf_br" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:26.319 Cannot find device "nvmf_init_if" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:26.319 Cannot find device "nvmf_init_if2" 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:26.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:26.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:26.319 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:24:26.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:26.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:26.578 00:24:26.578 --- 10.0.0.3 ping statistics --- 00:24:26.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.578 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:26.578 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:26.578 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:24:26.578 00:24:26.578 --- 10.0.0.4 ping statistics --- 00:24:26.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.578 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:26.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:26.578 00:24:26.578 --- 10.0.0.1 ping statistics --- 00:24:26.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.578 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:26.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:26.578 00:24:26.578 --- 10.0.0.2 ping statistics --- 00:24:26.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.578 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=104125 00:24:26.578 11:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 104125 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 104125 ']' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.578 11:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:26.837 [2024-12-16 11:44:32.013459] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:26.837 [2024-12-16 11:44:32.014945] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:24:26.837 [2024-12-16 11:44:32.015058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.837 [2024-12-16 11:44:32.164147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.837 [2024-12-16 11:44:32.233416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.837 [2024-12-16 11:44:32.233485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.837 [2024-12-16 11:44:32.233496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.837 [2024-12-16 11:44:32.233505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.837 [2024-12-16 11:44:32.233513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.837 [2024-12-16 11:44:32.233932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.095 [2024-12-16 11:44:32.335063] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:27.095 [2024-12-16 11:44:32.335363] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
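[Annotation] The block above is nvmf/common.sh building its test network: the "Cannot find device" lines are the harness tearing down a topology that does not exist yet (each cleanup command is followed by "true" so failures are ignored), after which it creates a network namespace, four veth pairs, a bridge, and iptables ACCEPT rules for TCP port 4420, verifies connectivity in both directions with ping, and finally starts nvmf_tgt inside the namespace in interrupt mode. A condensed sketch of the same topology, showing one of the two interface pairs per side; names and addresses are copied from the log, and this would need to run as root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two host-side peers
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespace, as in the log

Traffic then flows nvmf_init_if (10.0.0.1, host) -> nvmf_init_br -> nvmf_br -> nvmf_tgt_br -> nvmf_tgt_if (10.0.0.3, namespace), which is why the initiator below connects to 10.0.0.3:4420.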
00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.095 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.354 [2024-12-16 11:44:32.726828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.354 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:24:27.354 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:27.354 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.354 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:27.647 ************************************ 00:24:27.647 START TEST lvs_grow_clean 00:24:27.647 ************************************ 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:27.647 11:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:27.905 11:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:27.905 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:28.168 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:28.168 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:28.168 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:28.435 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:28.435 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:28.435 11:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 lvol 150 00:24:28.695 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a1ff3c4-3fe0-4900-a2fd-e71568017a92 00:24:28.695 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:28.695 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:28.954 [2024-12-16 11:44:34.274641] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:28.954 [2024-12-16 11:44:34.274806] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:28.954 true 00:24:28.954 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:28.954 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:29.213 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:29.213 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:29.781 11:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a1ff3c4-3fe0-4900-a2fd-e71568017a92 00:24:29.781 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:30.040 [2024-12-16 11:44:35.423124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.040 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=104272 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 104272 /var/tmp/bdevperf.sock 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 104272 ']' 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.298 11:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:30.557 [2024-12-16 11:44:35.751319] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:24:30.557 [2024-12-16 11:44:35.751472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104272 ] 00:24:30.557 [2024-12-16 11:44:35.894833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.816 [2024-12-16 11:44:35.972801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.816 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.816 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:24:30.816 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:31.075 Nvme0n1 00:24:31.076 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:31.334 [ 00:24:31.335 { 00:24:31.335 "aliases": [ 00:24:31.335 "1a1ff3c4-3fe0-4900-a2fd-e71568017a92" 00:24:31.335 ], 00:24:31.335 "assigned_rate_limits": { 00:24:31.335 "r_mbytes_per_sec": 0, 00:24:31.335 "rw_ios_per_sec": 0, 00:24:31.335 "rw_mbytes_per_sec": 0, 00:24:31.335 "w_mbytes_per_sec": 0 00:24:31.335 }, 00:24:31.335 "block_size": 4096, 00:24:31.335 "claimed": false, 00:24:31.335 "driver_specific": { 00:24:31.335 "mp_policy": "active_passive", 00:24:31.335 "nvme": [ 00:24:31.335 { 00:24:31.335 "ctrlr_data": { 00:24:31.335 "ana_reporting": false, 00:24:31.335 "cntlid": 1, 00:24:31.335 "firmware_revision": "25.01", 00:24:31.335 "model_number": "SPDK bdev Controller", 00:24:31.335 "multi_ctrlr": true, 00:24:31.335 "oacs": { 00:24:31.335 "firmware": 0, 00:24:31.335 "format": 0, 00:24:31.335 "ns_manage": 0, 00:24:31.335 "security": 0 00:24:31.335 }, 00:24:31.335 "serial_number": "SPDK0", 00:24:31.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:31.335 "vendor_id": "0x8086" 00:24:31.335 }, 00:24:31.335 "ns_data": { 00:24:31.335 "can_share": true, 00:24:31.335 "id": 1 00:24:31.335 }, 00:24:31.335 "trid": { 00:24:31.335 "adrfam": "IPv4", 00:24:31.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:31.335 "traddr": "10.0.0.3", 00:24:31.335 "trsvcid": "4420", 00:24:31.335 "trtype": "TCP" 00:24:31.335 }, 00:24:31.335 "vs": { 00:24:31.335 "nvme_version": "1.3" 00:24:31.335 } 00:24:31.335 } 00:24:31.335 ] 00:24:31.335 }, 00:24:31.335 "memory_domains": [ 00:24:31.335 { 00:24:31.335 "dma_device_id": "system", 00:24:31.335 "dma_device_type": 1 00:24:31.335 } 00:24:31.335 ], 00:24:31.335 "name": "Nvme0n1", 00:24:31.335 "num_blocks": 38912, 00:24:31.335 "numa_id": -1, 00:24:31.335 "product_name": "NVMe disk", 00:24:31.335 "supported_io_types": { 00:24:31.335 "abort": true, 00:24:31.335 "compare": true, 00:24:31.335 "compare_and_write": true, 00:24:31.335 "copy": true, 00:24:31.335 "flush": true, 00:24:31.335 "get_zone_info": false, 00:24:31.335 "nvme_admin": true, 00:24:31.335 "nvme_io": true, 00:24:31.335 "nvme_io_md": false, 00:24:31.335 "nvme_iov_md": false, 00:24:31.335 "read": true, 00:24:31.335 "reset": true, 00:24:31.335 "seek_data": false, 00:24:31.335 
"seek_hole": false, 00:24:31.335 "unmap": true, 00:24:31.335 "write": true, 00:24:31.335 "write_zeroes": true, 00:24:31.335 "zcopy": false, 00:24:31.335 "zone_append": false, 00:24:31.335 "zone_management": false 00:24:31.335 }, 00:24:31.335 "uuid": "1a1ff3c4-3fe0-4900-a2fd-e71568017a92", 00:24:31.335 "zoned": false 00:24:31.335 } 00:24:31.335 ] 00:24:31.335 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=104306 00:24:31.335 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.335 11:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:31.593 Running I/O for 10 seconds... 00:24:32.529 Latency(us) 00:24:32.529 [2024-12-16T11:44:37.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:32.529 Nvme0n1 : 1.00 6469.00 25.27 0.00 0.00 0.00 0.00 0.00 00:24:32.529 [2024-12-16T11:44:37.943Z] =================================================================================================================== 00:24:32.529 [2024-12-16T11:44:37.943Z] Total : 6469.00 25.27 0.00 0.00 0.00 0.00 0.00 00:24:32.529 00:24:33.465 11:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:33.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.465 Nvme0n1 : 2.00 6726.50 26.28 0.00 0.00 0.00 0.00 0.00 00:24:33.465 [2024-12-16T11:44:38.879Z] =================================================================================================================== 00:24:33.465 [2024-12-16T11:44:38.879Z] Total : 6726.50 26.28 0.00 0.00 0.00 0.00 0.00 00:24:33.465 00:24:33.724 true 00:24:33.724 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:33.724 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:34.018 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:34.018 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:34.018 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 104306 00:24:34.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:34.584 Nvme0n1 : 3.00 6825.67 26.66 0.00 0.00 0.00 0.00 0.00 00:24:34.584 [2024-12-16T11:44:39.998Z] =================================================================================================================== 00:24:34.584 [2024-12-16T11:44:39.998Z] Total : 6825.67 26.66 0.00 0.00 0.00 0.00 0.00 00:24:34.584 00:24:35.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:35.520 Nvme0n1 : 4.00 6879.25 26.87 0.00 0.00 0.00 0.00 0.00 00:24:35.520 
[2024-12-16T11:44:40.934Z] =================================================================================================================== 00:24:35.520 [2024-12-16T11:44:40.934Z] Total : 6879.25 26.87 0.00 0.00 0.00 0.00 0.00 00:24:35.520 00:24:36.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:36.456 Nvme0n1 : 5.00 6869.80 26.84 0.00 0.00 0.00 0.00 0.00 00:24:36.456 [2024-12-16T11:44:41.870Z] =================================================================================================================== 00:24:36.456 [2024-12-16T11:44:41.870Z] Total : 6869.80 26.84 0.00 0.00 0.00 0.00 0.00 00:24:36.456 00:24:37.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:37.833 Nvme0n1 : 6.00 6845.00 26.74 0.00 0.00 0.00 0.00 0.00 00:24:37.833 [2024-12-16T11:44:43.247Z] =================================================================================================================== 00:24:37.833 [2024-12-16T11:44:43.247Z] Total : 6845.00 26.74 0.00 0.00 0.00 0.00 0.00 00:24:37.833 00:24:38.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:38.770 Nvme0n1 : 7.00 6841.86 26.73 0.00 0.00 0.00 0.00 0.00 00:24:38.770 [2024-12-16T11:44:44.184Z] =================================================================================================================== 00:24:38.770 [2024-12-16T11:44:44.184Z] Total : 6841.86 26.73 0.00 0.00 0.00 0.00 0.00 00:24:38.770 00:24:39.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:39.778 Nvme0n1 : 8.00 6844.50 26.74 0.00 0.00 0.00 0.00 0.00 00:24:39.778 [2024-12-16T11:44:45.192Z] =================================================================================================================== 00:24:39.778 [2024-12-16T11:44:45.192Z] Total : 6844.50 26.74 0.00 0.00 0.00 0.00 0.00 00:24:39.778 00:24:40.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:40.714 Nvme0n1 : 9.00 6841.33 26.72 0.00 0.00 0.00 0.00 0.00 00:24:40.714 [2024-12-16T11:44:46.128Z] =================================================================================================================== 00:24:40.714 [2024-12-16T11:44:46.128Z] Total : 6841.33 26.72 0.00 0.00 0.00 0.00 0.00 00:24:40.714 00:24:41.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:41.650 Nvme0n1 : 10.00 6817.30 26.63 0.00 0.00 0.00 0.00 0.00 00:24:41.650 [2024-12-16T11:44:47.064Z] =================================================================================================================== 00:24:41.650 [2024-12-16T11:44:47.064Z] Total : 6817.30 26.63 0.00 0.00 0.00 0.00 0.00 00:24:41.650 00:24:41.650 00:24:41.650 Latency(us) 00:24:41.650 [2024-12-16T11:44:47.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:41.650 Nvme0n1 : 10.02 6817.26 26.63 0.00 0.00 18754.93 10485.76 44326.17 00:24:41.650 [2024-12-16T11:44:47.064Z] =================================================================================================================== 00:24:41.650 [2024-12-16T11:44:47.064Z] Total : 6817.26 26.63 0.00 0.00 18754.93 10485.76 44326.17 00:24:41.650 { 00:24:41.650 "results": [ 00:24:41.650 { 00:24:41.650 "job": "Nvme0n1", 00:24:41.650 "core_mask": "0x2", 00:24:41.650 "workload": "randwrite", 00:24:41.650 "status": "finished", 00:24:41.650 "queue_depth": 128, 00:24:41.650 "io_size": 4096, 
00:24:41.650 "runtime": 10.020004, 00:24:41.650 "iops": 6817.2627475997015, 00:24:41.650 "mibps": 26.629932607811334, 00:24:41.650 "io_failed": 0, 00:24:41.650 "io_timeout": 0, 00:24:41.650 "avg_latency_us": 18754.92625696867, 00:24:41.650 "min_latency_us": 10485.76, 00:24:41.650 "max_latency_us": 44326.167272727274 00:24:41.650 } 00:24:41.650 ], 00:24:41.650 "core_count": 1 00:24:41.650 } 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 104272 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 104272 ']' 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 104272 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104272 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:41.650 killing process with pid 104272 00:24:41.650 Received shutdown signal, test time was about 10.000000 seconds 00:24:41.650 00:24:41.650 Latency(us) 00:24:41.650 [2024-12-16T11:44:47.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.650 [2024-12-16T11:44:47.064Z] =================================================================================================================== 00:24:41.650 [2024-12-16T11:44:47.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104272' 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 104272 00:24:41.650 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 104272 00:24:41.909 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:42.476 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:42.735 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:42.735 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:42.994 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:24:42.994 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:24:42.994 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:43.252 [2024-12-16 11:44:48.482733] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:43.252 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:43.511 2024/12/16 11:44:48 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cba7345a-cf9f-4da2-9252-f5a792d0f2d2], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:24:43.511 request: 00:24:43.511 { 00:24:43.511 "method": "bdev_lvol_get_lvstores", 00:24:43.511 "params": { 00:24:43.511 "uuid": "cba7345a-cf9f-4da2-9252-f5a792d0f2d2" 00:24:43.511 } 00:24:43.511 } 00:24:43.511 Got JSON-RPC error response 00:24:43.511 GoRPCClient: error on JSON-RPC call 00:24:43.511 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:24:43.511 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
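[Annotation] The JSON-RPC error just above is the expected negative case, not a failure: the test deletes aio_bdev out from under the lvstore and then asserts, via the harness's NOT wrapper, that bdev_lvol_get_lvstores now fails with Code=-19 (No such device). A hedged sketch of the same expected-failure check, reusing the $rpc/$lvs placeholders from the sketch above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if $rpc bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore still visible after its base bdev was deleted" >&2
        exit 1      # NOT() in autotest_common.sh inverts the exit status to the same effect
    fi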
00:24:43.511 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.511 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.511 11:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:43.770 aio_bdev 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a1ff3c4-3fe0-4900-a2fd-e71568017a92 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1a1ff3c4-3fe0-4900-a2fd-e71568017a92 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:43.770 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:44.338 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a1ff3c4-3fe0-4900-a2fd-e71568017a92 -t 2000 00:24:44.338 [ 00:24:44.338 { 00:24:44.338 "aliases": [ 00:24:44.338 "lvs/lvol" 00:24:44.338 ], 00:24:44.338 "assigned_rate_limits": { 00:24:44.338 "r_mbytes_per_sec": 0, 00:24:44.338 "rw_ios_per_sec": 0, 00:24:44.338 "rw_mbytes_per_sec": 0, 00:24:44.338 "w_mbytes_per_sec": 0 00:24:44.338 }, 00:24:44.338 "block_size": 4096, 00:24:44.338 "claimed": false, 00:24:44.338 "driver_specific": { 00:24:44.338 "lvol": { 00:24:44.338 "base_bdev": "aio_bdev", 00:24:44.338 "clone": false, 00:24:44.338 "esnap_clone": false, 00:24:44.338 "lvol_store_uuid": "cba7345a-cf9f-4da2-9252-f5a792d0f2d2", 00:24:44.338 "num_allocated_clusters": 38, 00:24:44.338 "snapshot": false, 00:24:44.338 "thin_provision": false 00:24:44.338 } 00:24:44.338 }, 00:24:44.338 "name": "1a1ff3c4-3fe0-4900-a2fd-e71568017a92", 00:24:44.338 "num_blocks": 38912, 00:24:44.338 "product_name": "Logical Volume", 00:24:44.338 "supported_io_types": { 00:24:44.338 "abort": false, 00:24:44.338 "compare": false, 00:24:44.338 "compare_and_write": false, 00:24:44.338 "copy": false, 00:24:44.338 "flush": false, 00:24:44.338 "get_zone_info": false, 00:24:44.338 "nvme_admin": false, 00:24:44.338 "nvme_io": false, 00:24:44.338 "nvme_io_md": false, 00:24:44.338 "nvme_iov_md": false, 00:24:44.338 "read": true, 00:24:44.338 "reset": true, 00:24:44.338 "seek_data": true, 00:24:44.338 "seek_hole": true, 00:24:44.338 "unmap": true, 00:24:44.338 "write": true, 00:24:44.338 "write_zeroes": true, 00:24:44.338 "zcopy": false, 00:24:44.338 "zone_append": false, 00:24:44.338 "zone_management": false 00:24:44.338 }, 00:24:44.338 "uuid": 
"1a1ff3c4-3fe0-4900-a2fd-e71568017a92", 00:24:44.338 "zoned": false 00:24:44.338 } 00:24:44.338 ] 00:24:44.338 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:24:44.338 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:44.338 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:44.906 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:44.906 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:44.906 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:44.906 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:44.906 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1a1ff3c4-3fe0-4900-a2fd-e71568017a92 00:24:45.193 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cba7345a-cf9f-4da2-9252-f5a792d0f2d2 00:24:45.452 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:45.712 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:46.279 00:24:46.279 real 0m18.790s 00:24:46.279 user 0m17.626s 00:24:46.279 sys 0m2.551s 00:24:46.279 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.279 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.279 ************************************ 00:24:46.279 END TEST lvs_grow_clean 00:24:46.280 ************************************ 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:46.280 ************************************ 00:24:46.280 START TEST lvs_grow_dirty 00:24:46.280 ************************************ 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:24:46.280 11:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:46.280 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:46.538 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:46.538 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=296e0fa0-ae6f-4579-8957-e706fbb62889 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:47.106 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 296e0fa0-ae6f-4579-8957-e706fbb62889 lvol 150 00:24:47.365 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:24:47.365 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:47.365 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:47.933 [2024-12-16 11:44:53.094662] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:47.933 [2024-12-16 11:44:53.094848] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:47.933 true 00:24:47.933 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:24:47.933 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:48.191 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:48.192 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:48.450 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:24:48.708 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:48.967 [2024-12-16 11:44:54.255192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:48.967 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=104706 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 104706 /var/tmp/bdevperf.sock 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104706 ']' 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
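[Annotation] From here the dirty variant repeats the clean flow against lvstore 296e0fa0-ae6f-4579-8957-e706fbb62889. The I/O half of each pass is driven by bdevperf started with -z, which makes it idle on its RPC socket until a bdev is configured; condensing the commands visible in the log, the per-pass flow is:

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # waits on the RPC socket
    # attach the exported lvol over NVMe/TCP, then kick off the 10 s randwrite run
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The mid-run bdev_lvol_grow_lvstore call (issued at the 2-second mark while bdevperf writes) is the actual subject under test: the per-second IOPS table that follows shows the workload continuing across the grow.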
00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.226 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:49.226 [2024-12-16 11:44:54.627344] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:24:49.226 [2024-12-16 11:44:54.627439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104706 ] 00:24:49.485 [2024-12-16 11:44:54.765165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.485 [2024-12-16 11:44:54.830183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.743 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.743 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:49.743 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:50.002 Nvme0n1 00:24:50.002 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:50.260 [ 00:24:50.260 { 00:24:50.260 "aliases": [ 00:24:50.260 "80dec4c4-d05d-4325-812e-8f9393d2cf2c" 00:24:50.260 ], 00:24:50.260 "assigned_rate_limits": { 00:24:50.260 "r_mbytes_per_sec": 0, 00:24:50.260 "rw_ios_per_sec": 0, 00:24:50.260 "rw_mbytes_per_sec": 0, 00:24:50.260 "w_mbytes_per_sec": 0 00:24:50.260 }, 00:24:50.260 "block_size": 4096, 00:24:50.260 "claimed": false, 00:24:50.260 "driver_specific": { 00:24:50.260 "mp_policy": "active_passive", 00:24:50.260 "nvme": [ 00:24:50.260 { 00:24:50.260 "ctrlr_data": { 00:24:50.260 "ana_reporting": false, 00:24:50.260 "cntlid": 1, 00:24:50.260 "firmware_revision": "25.01", 00:24:50.260 "model_number": "SPDK bdev Controller", 00:24:50.260 "multi_ctrlr": true, 00:24:50.260 "oacs": { 00:24:50.260 "firmware": 0, 00:24:50.260 "format": 0, 00:24:50.260 "ns_manage": 0, 00:24:50.260 "security": 0 00:24:50.260 }, 00:24:50.260 "serial_number": "SPDK0", 00:24:50.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.260 "vendor_id": "0x8086" 00:24:50.260 }, 00:24:50.260 "ns_data": { 00:24:50.260 "can_share": true, 00:24:50.260 "id": 1 00:24:50.260 }, 00:24:50.260 "trid": { 00:24:50.260 "adrfam": "IPv4", 00:24:50.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.261 "traddr": "10.0.0.3", 00:24:50.261 "trsvcid": "4420", 00:24:50.261 "trtype": "TCP" 00:24:50.261 }, 00:24:50.261 "vs": { 00:24:50.261 "nvme_version": "1.3" 00:24:50.261 } 00:24:50.261 } 00:24:50.261 ] 00:24:50.261 }, 00:24:50.261 "memory_domains": [ 00:24:50.261 { 00:24:50.261 "dma_device_id": "system", 00:24:50.261 "dma_device_type": 1 
00:24:50.261 } 00:24:50.261 ], 00:24:50.261 "name": "Nvme0n1", 00:24:50.261 "num_blocks": 38912, 00:24:50.261 "numa_id": -1, 00:24:50.261 "product_name": "NVMe disk", 00:24:50.261 "supported_io_types": { 00:24:50.261 "abort": true, 00:24:50.261 "compare": true, 00:24:50.261 "compare_and_write": true, 00:24:50.261 "copy": true, 00:24:50.261 "flush": true, 00:24:50.261 "get_zone_info": false, 00:24:50.261 "nvme_admin": true, 00:24:50.261 "nvme_io": true, 00:24:50.261 "nvme_io_md": false, 00:24:50.261 "nvme_iov_md": false, 00:24:50.261 "read": true, 00:24:50.261 "reset": true, 00:24:50.261 "seek_data": false, 00:24:50.261 "seek_hole": false, 00:24:50.261 "unmap": true, 00:24:50.261 "write": true, 00:24:50.261 "write_zeroes": true, 00:24:50.261 "zcopy": false, 00:24:50.261 "zone_append": false, 00:24:50.261 "zone_management": false 00:24:50.261 }, 00:24:50.261 "uuid": "80dec4c4-d05d-4325-812e-8f9393d2cf2c", 00:24:50.261 "zoned": false 00:24:50.261 } 00:24:50.261 ] 00:24:50.261 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=104740 00:24:50.261 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.261 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:50.519 Running I/O for 10 seconds... 00:24:51.455 Latency(us) 00:24:51.456 [2024-12-16T11:44:56.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:51.456 Nvme0n1 : 1.00 7566.00 29.55 0.00 0.00 0.00 0.00 0.00 00:24:51.456 [2024-12-16T11:44:56.870Z] =================================================================================================================== 00:24:51.456 [2024-12-16T11:44:56.870Z] Total : 7566.00 29.55 0.00 0.00 0.00 0.00 0.00 00:24:51.456 00:24:52.391 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:24:52.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:52.391 Nvme0n1 : 2.00 7571.00 29.57 0.00 0.00 0.00 0.00 0.00 00:24:52.391 [2024-12-16T11:44:57.805Z] =================================================================================================================== 00:24:52.391 [2024-12-16T11:44:57.805Z] Total : 7571.00 29.57 0.00 0.00 0.00 0.00 0.00 00:24:52.391 00:24:52.649 true 00:24:52.649 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:24:52.649 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:52.908 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:52.908 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:52.908 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 104740 00:24:53.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:53.476 Nvme0n1 : 3.00 7581.00 29.61 0.00 0.00 0.00 0.00 0.00 00:24:53.476 [2024-12-16T11:44:58.890Z] =================================================================================================================== 00:24:53.476 [2024-12-16T11:44:58.890Z] Total : 7581.00 29.61 0.00 0.00 0.00 0.00 0.00 00:24:53.476 00:24:54.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.412 Nvme0n1 : 4.00 7599.50 29.69 0.00 0.00 0.00 0.00 0.00 00:24:54.412 [2024-12-16T11:44:59.826Z] =================================================================================================================== 00:24:54.412 [2024-12-16T11:44:59.826Z] Total : 7599.50 29.69 0.00 0.00 0.00 0.00 0.00 00:24:54.412 00:24:55.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:55.348 Nvme0n1 : 5.00 7535.80 29.44 0.00 0.00 0.00 0.00 0.00 00:24:55.348 [2024-12-16T11:45:00.762Z] =================================================================================================================== 00:24:55.348 [2024-12-16T11:45:00.762Z] Total : 7535.80 29.44 0.00 0.00 0.00 0.00 0.00 00:24:55.348 00:24:56.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:56.727 Nvme0n1 : 6.00 7368.00 28.78 0.00 0.00 0.00 0.00 0.00 00:24:56.727 [2024-12-16T11:45:02.141Z] =================================================================================================================== 00:24:56.727 [2024-12-16T11:45:02.141Z] Total : 7368.00 28.78 0.00 0.00 0.00 0.00 0.00 00:24:56.727 00:24:57.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.663 Nvme0n1 : 7.00 7341.57 28.68 0.00 0.00 0.00 0.00 0.00 00:24:57.663 [2024-12-16T11:45:03.077Z] =================================================================================================================== 00:24:57.663 [2024-12-16T11:45:03.077Z] Total : 7341.57 28.68 0.00 0.00 0.00 0.00 0.00 00:24:57.663 00:24:58.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.600 Nvme0n1 : 8.00 7353.25 28.72 0.00 0.00 0.00 0.00 0.00 00:24:58.600 [2024-12-16T11:45:04.014Z] =================================================================================================================== 00:24:58.600 [2024-12-16T11:45:04.014Z] Total : 7353.25 28.72 0.00 0.00 0.00 0.00 0.00 00:24:58.600 00:24:59.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:59.537 Nvme0n1 : 9.00 7366.78 28.78 0.00 0.00 0.00 0.00 0.00 00:24:59.537 [2024-12-16T11:45:04.951Z] =================================================================================================================== 00:24:59.537 [2024-12-16T11:45:04.951Z] Total : 7366.78 28.78 0.00 0.00 0.00 0.00 0.00 00:24:59.537 00:25:00.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:00.472 Nvme0n1 : 10.00 7392.70 28.88 0.00 0.00 0.00 0.00 0.00 00:25:00.472 [2024-12-16T11:45:05.886Z] =================================================================================================================== 00:25:00.472 [2024-12-16T11:45:05.886Z] Total : 7392.70 28.88 0.00 0.00 0.00 0.00 0.00 00:25:00.472 00:25:00.472 00:25:00.472 Latency(us) 00:25:00.472 [2024-12-16T11:45:05.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.472 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:25:00.472 Nvme0n1 : 10.01 7395.82 28.89 0.00 0.00 17294.79 6315.29 176351.42 00:25:00.472 [2024-12-16T11:45:05.886Z] =================================================================================================================== 00:25:00.472 [2024-12-16T11:45:05.886Z] Total : 7395.82 28.89 0.00 0.00 17294.79 6315.29 176351.42 00:25:00.472 { 00:25:00.472 "results": [ 00:25:00.472 { 00:25:00.472 "job": "Nvme0n1", 00:25:00.472 "core_mask": "0x2", 00:25:00.472 "workload": "randwrite", 00:25:00.472 "status": "finished", 00:25:00.472 "queue_depth": 128, 00:25:00.472 "io_size": 4096, 00:25:00.472 "runtime": 10.013083, 00:25:00.472 "iops": 7395.824043404014, 00:25:00.472 "mibps": 28.88993766954693, 00:25:00.472 "io_failed": 0, 00:25:00.472 "io_timeout": 0, 00:25:00.472 "avg_latency_us": 17294.787928677088, 00:25:00.472 "min_latency_us": 6315.2872727272725, 00:25:00.472 "max_latency_us": 176351.41818181818 00:25:00.472 } 00:25:00.472 ], 00:25:00.472 "core_count": 1 00:25:00.472 } 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 104706 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 104706 ']' 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 104706 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104706 00:25:00.472 killing process with pid 104706 00:25:00.472 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.472 00:25:00.472 Latency(us) 00:25:00.472 [2024-12-16T11:45:05.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.472 [2024-12-16T11:45:05.886Z] =================================================================================================================== 00:25:00.472 [2024-12-16T11:45:05.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104706' 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 104706 00:25:00.472 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 104706 00:25:00.731 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:01.299 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.557 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:01.557 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 104125 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 104125 00:25:01.816 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 104125 Killed "${NVMF_APP[@]}" "$@" 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=104895 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 104895 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104895 ']' 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
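The restart above follows the usual SPDK app-start pattern: launch nvmf_tgt inside the target network namespace with the flags shown in the log, then block until the RPC socket answers before issuing further RPCs. A minimal sketch of that pattern, assuming the repository layout from this run and using rpc_get_methods purely as a liveness probe (the harness's own waitforlisten helper may do this differently):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
# Poll the default RPC socket until the target is up; any cheap RPC works here.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done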
00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.816 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:01.816 [2024-12-16 11:45:07.114505] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:01.816 [2024-12-16 11:45:07.115881] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:01.816 [2024-12-16 11:45:07.115966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.074 [2024-12-16 11:45:07.275901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.074 [2024-12-16 11:45:07.369447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.074 [2024-12-16 11:45:07.369525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.074 [2024-12-16 11:45:07.369546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.074 [2024-12-16 11:45:07.369562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.074 [2024-12-16 11:45:07.369576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.074 [2024-12-16 11:45:07.370155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.074 [2024-12-16 11:45:07.470872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:02.074 [2024-12-16 11:45:07.471220] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
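What follows is the dirty-recovery half of the test: the lvstore's backing AIO bdev is re-created after the previous target was killed with SIGKILL mid-write, and the blobstore replays its metadata ("Performing recovery on blobstore"). The verification reduces to three RPCs, sketched here with the UUIDs and paths from this run; the $rpc shorthand and the exit-status check around jq are assumptions, everything else appears verbatim in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Re-attach the backing file; blobstore recovery runs when the bdev is examined.
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# The lvol must reappear under its old UUID once recovery completes.
$rpc bdev_get_bdevs -b 80dec4c4-d05d-4325-812e-8f9393d2cf2c -t 2000 >/dev/null
# Cluster accounting must survive the crash: 61 free of 99 data clusters.
[[ $($rpc bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 \
      | jq -r '.[0].free_clusters') -eq 61 ]]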
00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.010 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:03.010 [2024-12-16 11:45:08.401443] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:03.010 [2024-12-16 11:45:08.402086] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:03.010 [2024-12-16 11:45:08.402436] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.270 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:03.529 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80dec4c4-d05d-4325-812e-8f9393d2cf2c -t 2000 00:25:03.787 [ 00:25:03.787 { 00:25:03.787 "aliases": [ 00:25:03.787 "lvs/lvol" 00:25:03.787 ], 00:25:03.787 "assigned_rate_limits": { 00:25:03.787 "r_mbytes_per_sec": 0, 00:25:03.787 "rw_ios_per_sec": 0, 00:25:03.787 "rw_mbytes_per_sec": 0, 00:25:03.787 "w_mbytes_per_sec": 0 00:25:03.787 }, 00:25:03.787 "block_size": 4096, 00:25:03.787 "claimed": false, 00:25:03.787 "driver_specific": { 00:25:03.787 "lvol": { 00:25:03.787 "base_bdev": "aio_bdev", 00:25:03.787 "clone": false, 00:25:03.787 "esnap_clone": false, 00:25:03.787 
"lvol_store_uuid": "296e0fa0-ae6f-4579-8957-e706fbb62889", 00:25:03.787 "num_allocated_clusters": 38, 00:25:03.787 "snapshot": false, 00:25:03.787 "thin_provision": false 00:25:03.787 } 00:25:03.787 }, 00:25:03.787 "name": "80dec4c4-d05d-4325-812e-8f9393d2cf2c", 00:25:03.787 "num_blocks": 38912, 00:25:03.787 "product_name": "Logical Volume", 00:25:03.787 "supported_io_types": { 00:25:03.787 "abort": false, 00:25:03.787 "compare": false, 00:25:03.787 "compare_and_write": false, 00:25:03.787 "copy": false, 00:25:03.787 "flush": false, 00:25:03.787 "get_zone_info": false, 00:25:03.787 "nvme_admin": false, 00:25:03.787 "nvme_io": false, 00:25:03.787 "nvme_io_md": false, 00:25:03.787 "nvme_iov_md": false, 00:25:03.787 "read": true, 00:25:03.787 "reset": true, 00:25:03.787 "seek_data": true, 00:25:03.787 "seek_hole": true, 00:25:03.787 "unmap": true, 00:25:03.787 "write": true, 00:25:03.787 "write_zeroes": true, 00:25:03.787 "zcopy": false, 00:25:03.787 "zone_append": false, 00:25:03.787 "zone_management": false 00:25:03.787 }, 00:25:03.787 "uuid": "80dec4c4-d05d-4325-812e-8f9393d2cf2c", 00:25:03.787 "zoned": false 00:25:03.787 } 00:25:03.787 ] 00:25:03.787 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:03.788 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:03.788 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:25:04.046 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:25:04.046 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:04.046 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:25:04.304 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:25:04.304 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:04.563 [2024-12-16 11:45:09.887204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:04.563 
11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:04.563 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:04.821 2024/12/16 11:45:10 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:296e0fa0-ae6f-4579-8957-e706fbb62889], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:25:04.821 request: 00:25:04.821 { 00:25:04.821 "method": "bdev_lvol_get_lvstores", 00:25:04.821 "params": { 00:25:04.821 "uuid": "296e0fa0-ae6f-4579-8957-e706fbb62889" 00:25:04.821 } 00:25:04.821 } 00:25:04.821 Got JSON-RPC error response 00:25:04.821 GoRPCClient: error on JSON-RPC call 00:25:04.821 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:25:04.821 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.821 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.822 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.822 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:05.080 aio_bdev 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:05.080 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:05.647 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80dec4c4-d05d-4325-812e-8f9393d2cf2c -t 2000 00:25:05.647 [ 00:25:05.647 { 00:25:05.647 "aliases": [ 00:25:05.647 "lvs/lvol" 00:25:05.647 ], 00:25:05.647 "assigned_rate_limits": { 00:25:05.647 "r_mbytes_per_sec": 0, 00:25:05.647 "rw_ios_per_sec": 0, 00:25:05.647 "rw_mbytes_per_sec": 0, 00:25:05.647 "w_mbytes_per_sec": 0 00:25:05.647 }, 00:25:05.647 "block_size": 4096, 00:25:05.647 "claimed": false, 00:25:05.647 "driver_specific": { 00:25:05.647 "lvol": { 00:25:05.647 "base_bdev": "aio_bdev", 00:25:05.647 "clone": false, 00:25:05.647 "esnap_clone": false, 00:25:05.647 "lvol_store_uuid": "296e0fa0-ae6f-4579-8957-e706fbb62889", 00:25:05.647 "num_allocated_clusters": 38, 00:25:05.647 "snapshot": false, 00:25:05.647 "thin_provision": false 00:25:05.647 } 00:25:05.647 }, 00:25:05.647 "name": "80dec4c4-d05d-4325-812e-8f9393d2cf2c", 00:25:05.647 "num_blocks": 38912, 00:25:05.647 "product_name": "Logical Volume", 00:25:05.647 "supported_io_types": { 00:25:05.647 "abort": false, 00:25:05.647 "compare": false, 00:25:05.647 "compare_and_write": false, 00:25:05.647 "copy": false, 00:25:05.647 "flush": false, 00:25:05.647 "get_zone_info": false, 00:25:05.647 "nvme_admin": false, 00:25:05.647 "nvme_io": false, 00:25:05.647 "nvme_io_md": false, 00:25:05.647 "nvme_iov_md": false, 00:25:05.647 "read": true, 00:25:05.647 "reset": true, 00:25:05.647 "seek_data": true, 00:25:05.647 "seek_hole": true, 00:25:05.647 "unmap": true, 00:25:05.647 "write": true, 00:25:05.647 "write_zeroes": true, 00:25:05.647 "zcopy": false, 00:25:05.647 "zone_append": false, 00:25:05.647 "zone_management": false 00:25:05.647 }, 00:25:05.647 "uuid": "80dec4c4-d05d-4325-812e-8f9393d2cf2c", 00:25:05.647 "zoned": false 00:25:05.647 } 00:25:05.647 ] 00:25:05.905 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:05.905 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:05.905 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:06.163 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:06.163 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:06.163 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:06.423 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:06.423 
11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 80dec4c4-d05d-4325-812e-8f9393d2cf2c 00:25:06.685 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 296e0fa0-ae6f-4579-8957-e706fbb62889 00:25:06.944 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:07.202 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:07.771 00:25:07.771 real 0m21.281s 00:25:07.771 user 0m28.143s 00:25:07.771 sys 0m8.812s 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:07.771 ************************************ 00:25:07.771 END TEST lvs_grow_dirty 00:25:07.771 ************************************ 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:07.771 nvmf_trace.0 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.771 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.030 11:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.030 rmmod nvme_tcp 00:25:08.030 rmmod nvme_fabrics 00:25:08.030 rmmod nvme_keyring 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 104895 ']' 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 104895 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 104895 ']' 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 104895 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104895 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.030 killing process with pid 104895 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104895' 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 104895 00:25:08.030 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 104895 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:08.290 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:25:08.549 00:25:08.549 real 0m42.449s 00:25:08.549 user 0m47.131s 00:25:08.549 sys 0m12.307s 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:08.549 ************************************ 00:25:08.549 END TEST nvmf_lvs_grow 00:25:08.549 ************************************ 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:08.549 ************************************ 00:25:08.549 START TEST nvmf_bdev_io_wait 00:25:08.549 ************************************ 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:08.549 * Looking for test storage... 00:25:08.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:08.549 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:25:08.808 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:08.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.809 --rc genhtml_branch_coverage=1 00:25:08.809 --rc genhtml_function_coverage=1 00:25:08.809 --rc genhtml_legend=1 00:25:08.809 --rc geninfo_all_blocks=1 00:25:08.809 --rc geninfo_unexecuted_blocks=1 00:25:08.809 00:25:08.809 ' 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:08.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.809 --rc genhtml_branch_coverage=1 00:25:08.809 --rc genhtml_function_coverage=1 00:25:08.809 --rc genhtml_legend=1 00:25:08.809 --rc geninfo_all_blocks=1 00:25:08.809 --rc geninfo_unexecuted_blocks=1 00:25:08.809 00:25:08.809 ' 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:08.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.809 --rc genhtml_branch_coverage=1 00:25:08.809 --rc genhtml_function_coverage=1 00:25:08.809 --rc genhtml_legend=1 00:25:08.809 --rc geninfo_all_blocks=1 00:25:08.809 --rc geninfo_unexecuted_blocks=1 00:25:08.809 00:25:08.809 ' 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:08.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.809 --rc genhtml_branch_coverage=1 00:25:08.809 --rc genhtml_function_coverage=1 00:25:08.809 --rc genhtml_legend=1 00:25:08.809 --rc geninfo_all_blocks=1 00:25:08.809 --rc 
geninfo_unexecuted_blocks=1 00:25:08.809 00:25:08.809 ' 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.809 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:08.809 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:08.810 Cannot find device "nvmf_init_br" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:08.810 Cannot find device "nvmf_init_br2" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:08.810 Cannot find device "nvmf_tgt_br" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:08.810 Cannot find device "nvmf_tgt_br2" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:08.810 Cannot find device "nvmf_init_br" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:08.810 Cannot find device "nvmf_init_br2" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:25:08.810 Cannot find device "nvmf_tgt_br" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:08.810 Cannot find device "nvmf_tgt_br2" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:08.810 Cannot find device "nvmf_br" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:08.810 Cannot find device "nvmf_init_if" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:08.810 Cannot find device "nvmf_init_if2" 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:08.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:08.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:08.810 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:09.069 11:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:09.069 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:09.070 
11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:09.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:09.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:09.070 00:25:09.070 --- 10.0.0.3 ping statistics --- 00:25:09.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.070 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:09.070 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:09.070 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:25:09.070 00:25:09.070 --- 10.0.0.4 ping statistics --- 00:25:09.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.070 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:09.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:09.070 00:25:09.070 --- 10.0.0.1 ping statistics --- 00:25:09.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.070 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:09.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:09.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:25:09.070 00:25:09.070 --- 10.0.0.2 ping statistics --- 00:25:09.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.070 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:09.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=105371 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 105371 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 105371 ']' 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.070 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:09.329 [2024-12-16 11:45:14.488838] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:09.329 [2024-12-16 11:45:14.490559] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:09.329 [2024-12-16 11:45:14.490805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.329 [2024-12-16 11:45:14.646282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.329 [2024-12-16 11:45:14.720073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.329 [2024-12-16 11:45:14.720424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.329 [2024-12-16 11:45:14.720585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.329 [2024-12-16 11:45:14.720741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.329 [2024-12-16 11:45:14.720793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.329 [2024-12-16 11:45:14.722199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.329 [2024-12-16 11:45:14.722328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.329 [2024-12-16 11:45:14.722470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:09.329 [2024-12-16 11:45:14.722476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.329 [2024-12-16 11:45:14.724041] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
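The bring-up traced above (nvmf_veth_init in nvmf/common.sh) reduces to a handful of iproute2 commands: a network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, and an iptables rule for the NVMe/TCP port. A minimal sketch using the names and addresses from this run; the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) is elided for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # host -> namespaced target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # and back

The two pings mirror the connectivity checks in the trace; all of them must succeed before the target is started.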
00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.264 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 [2024-12-16 11:45:15.696136] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:10.524 [2024-12-16 11:45:15.696410] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:10.524 [2024-12-16 11:45:15.697688] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:10.524 [2024-12-16 11:45:15.698093] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
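Because nvmf_tgt is launched with --wait-for-rpc, the application sits idle until its RPC socket (/var/tmp/spdk.sock) receives the early-init calls; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. The sequence above, together with the provisioning that follows, written out as plain rpc.py calls (a sketch assuming the default RPC socket):

# Start the target inside the namespace; it waits for RPC before framework init
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
# (the harness polls the socket with waitforlisten before issuing RPCs)

scripts/rpc.py bdev_set_options -p 5 -c 1     # must be sent before framework init
scripts/rpc.py framework_start_init

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420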
00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 [2024-12-16 11:45:15.704511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 Malloc0 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 [2024-12-16 11:45:15.772491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=105424 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=105426 00:25:10.524 11:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.524 { 00:25:10.524 "params": { 00:25:10.524 "name": "Nvme$subsystem", 00:25:10.524 "trtype": "$TEST_TRANSPORT", 00:25:10.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.524 "adrfam": "ipv4", 00:25:10.524 "trsvcid": "$NVMF_PORT", 00:25:10.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.524 "hdgst": ${hdgst:-false}, 00:25:10.524 "ddgst": ${ddgst:-false} 00:25:10.524 }, 00:25:10.524 "method": "bdev_nvme_attach_controller" 00:25:10.524 } 00:25:10.524 EOF 00:25:10.524 )") 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:25:10.524 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=105428 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.525 { 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme$subsystem", 00:25:10.525 "trtype": "$TEST_TRANSPORT", 00:25:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "$NVMF_PORT", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.525 "hdgst": ${hdgst:-false}, 00:25:10.525 "ddgst": ${ddgst:-false} 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 } 00:25:10.525 EOF 00:25:10.525 )") 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=105431 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@35 -- # sync 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.525 { 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme$subsystem", 00:25:10.525 "trtype": "$TEST_TRANSPORT", 00:25:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "$NVMF_PORT", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.525 "hdgst": ${hdgst:-false}, 00:25:10.525 "ddgst": ${ddgst:-false} 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 } 00:25:10.525 EOF 00:25:10.525 )") 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme1", 00:25:10.525 "trtype": "tcp", 00:25:10.525 "traddr": "10.0.0.3", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "4420", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.525 "hdgst": false, 00:25:10.525 "ddgst": false 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 }' 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme1", 00:25:10.525 "trtype": "tcp", 00:25:10.525 "traddr": "10.0.0.3", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "4420", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.525 "hdgst": false, 00:25:10.525 "ddgst": false 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 }' 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
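Each bdevperf instance receives its initiator configuration over an anonymous pipe (--json /dev/fd/63), built by gen_nvmf_target_json from the printf/jq pipeline above. Only the inner method/params object is visible in the trace, so the enclosing "subsystems" wrapper below is an assumption about the generated document's shape; a sketch of an equivalent standalone write run:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    -q 128 -o 4096 -w write -t 1 -s 256 --json <(
cat << 'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)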
00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:10.525 { 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme$subsystem", 00:25:10.525 "trtype": "$TEST_TRANSPORT", 00:25:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "$NVMF_PORT", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.525 "hdgst": ${hdgst:-false}, 00:25:10.525 "ddgst": ${ddgst:-false} 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 } 00:25:10.525 EOF 00:25:10.525 )") 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme1", 00:25:10.525 "trtype": "tcp", 00:25:10.525 "traddr": "10.0.0.3", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "4420", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.525 "hdgst": false, 00:25:10.525 "ddgst": false 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 }' 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:10.525 "params": { 00:25:10.525 "name": "Nvme1", 00:25:10.525 "trtype": "tcp", 00:25:10.525 "traddr": "10.0.0.3", 00:25:10.525 "adrfam": "ipv4", 00:25:10.525 "trsvcid": "4420", 00:25:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.525 "hdgst": false, 00:25:10.525 "ddgst": false 00:25:10.525 }, 00:25:10.525 "method": "bdev_nvme_attach_controller" 00:25:10.525 }' 00:25:10.525 [2024-12-16 11:45:15.841060] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:10.525 [2024-12-16 11:45:15.841060] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:25:10.525 [2024-12-16 11:45:15.841167] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:25:10.525 [2024-12-16 11:45:15.841167] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:25:10.525 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 105424
[2024-12-16 11:45:15.846031] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
[2024-12-16 11:45:15.846258] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:25:10.525 [2024-12-16 11:45:15.866740] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
[2024-12-16 11:45:15.866807] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:25:10.784 [2024-12-16 11:45:16.073740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:10.784 [2024-12-16 11:45:16.138679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:25:10.784 [2024-12-16 11:45:16.151674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.043 [2024-12-16 11:45:16.207240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:25:11.043 [2024-12-16 11:45:16.239068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.043 [2024-12-16 11:45:16.298225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:25:11.043 Running I/O for 1 seconds...
00:25:11.043 [2024-12-16 11:45:16.304188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.043 Running I/O for 1 seconds...
00:25:11.043 [2024-12-16 11:45:16.357286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:25:11.043 Running I/O for 1 seconds...
00:25:11.302 Running I/O for 1 seconds...
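The four workloads run concurrently, one bdevperf process per workload, with disjoint core masks (0x10/0x20/0x40/0x80) and distinct shared-memory ids (-i 1..4) so the DPDK instances do not collide; the script then waits on each PID in turn, which is why the EAL banners above interleave. The orchestration pattern, sketched with gen_json standing in for gen_nvmf_target_json:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
common=(-q 128 -o 4096 -t 1 -s 256)

"$bdevperf" -m 0x10 -i 1 --json <(gen_json) "${common[@]}" -w write & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_json) "${common[@]}" -w read  & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_json) "${common[@]}" -w flush & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_json) "${common[@]}" -w unmap & UNMAP_PID=$!

wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # results print as each run completes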
00:25:12.251 186192.00 IOPS, 727.31 MiB/s
00:25:12.251 Latency(us)
00:25:12.251 [2024-12-16T11:45:17.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.251 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:25:12.251 Nvme1n1 : 1.00 185833.54 725.91 0.00 0.00 685.18 286.72 1891.61
00:25:12.251 [2024-12-16T11:45:17.665Z] ===================================================================================================================
00:25:12.251 [2024-12-16T11:45:17.665Z] Total : 185833.54 725.91 0.00 0.00 685.18 286.72 1891.61
00:25:12.251 6064.00 IOPS, 23.69 MiB/s
00:25:12.251 Latency(us)
00:25:12.251 [2024-12-16T11:45:17.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.252 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:25:12.252 Nvme1n1 : 1.02 6055.81 23.66 0.00 0.00 20952.53 4110.89 36461.85
00:25:12.252 [2024-12-16T11:45:17.666Z] ===================================================================================================================
00:25:12.252 [2024-12-16T11:45:17.666Z] Total : 6055.81 23.66 0.00 0.00 20952.53 4110.89 36461.85
00:25:12.252 5754.00 IOPS, 22.48 MiB/s
00:25:12.252 Latency(us)
00:25:12.252 [2024-12-16T11:45:17.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.252 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:25:12.252 Nvme1n1 : 1.01 5859.99 22.89 0.00 0.00 21759.42 6017.40 37415.10
00:25:12.252 [2024-12-16T11:45:17.666Z] ===================================================================================================================
00:25:12.252 [2024-12-16T11:45:17.666Z] Total : 5859.99 22.89 0.00 0.00 21759.42 6017.40 37415.10
00:25:12.252 8807.00 IOPS, 34.40 MiB/s
00:25:12.252 Latency(us)
00:25:12.252 [2024-12-16T11:45:17.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.252 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:25:12.252 Nvme1n1 : 1.01 8889.60 34.73 0.00 0.00 14349.32 2353.34 20256.58
00:25:12.252 [2024-12-16T11:45:17.666Z] ===================================================================================================================
00:25:12.252 [2024-12-16T11:45:17.666Z] Total : 8889.60 34.73 0.00 0.00 14349.32 2353.34 20256.58
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 105426
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 105428
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 105431
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.252 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.522 rmmod nvme_tcp 00:25:12.522 rmmod nvme_fabrics 00:25:12.522 rmmod nvme_keyring 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 105371 ']' 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 105371 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 105371 ']' 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 105371 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105371 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.522 killing process with pid 105371 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105371' 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 105371 00:25:12.522 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 105371 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:25:12.781 
11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:12.781 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.781 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:25:13.041 00:25:13.041 real 0m4.397s 00:25:13.041 user 0m13.425s 00:25:13.041 sys 0m2.286s 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.041 ************************************ 00:25:13.041 END TEST nvmf_bdev_io_wait 00:25:13.041 ************************************ 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 
-- # set +x 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:13.041 ************************************ 00:25:13.041 START TEST nvmf_queue_depth 00:25:13.041 ************************************ 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:13.041 * Looking for test storage... 00:25:13.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.041 --rc genhtml_branch_coverage=1 00:25:13.041 --rc genhtml_function_coverage=1 00:25:13.041 --rc genhtml_legend=1 00:25:13.041 --rc geninfo_all_blocks=1 00:25:13.041 --rc geninfo_unexecuted_blocks=1 00:25:13.041 00:25:13.041 ' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.041 --rc genhtml_branch_coverage=1 00:25:13.041 --rc genhtml_function_coverage=1 00:25:13.041 --rc genhtml_legend=1 00:25:13.041 --rc geninfo_all_blocks=1 00:25:13.041 --rc geninfo_unexecuted_blocks=1 00:25:13.041 00:25:13.041 ' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.041 --rc genhtml_branch_coverage=1 00:25:13.041 --rc genhtml_function_coverage=1 00:25:13.041 --rc genhtml_legend=1 00:25:13.041 --rc geninfo_all_blocks=1 00:25:13.041 --rc geninfo_unexecuted_blocks=1 00:25:13.041 00:25:13.041 ' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.041 --rc genhtml_branch_coverage=1 00:25:13.041 --rc genhtml_function_coverage=1 00:25:13.041 --rc genhtml_legend=1 00:25:13.041 --rc geninfo_all_blocks=1 00:25:13.041 --rc 
geninfo_unexecuted_blocks=1 00:25:13.041 00:25:13.041 ' 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.041 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.042 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:13.302 Cannot find device "nvmf_init_br" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:13.302 Cannot find device "nvmf_init_br2" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:13.302 Cannot find device "nvmf_tgt_br" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.302 Cannot find device "nvmf_tgt_br2" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:13.302 Cannot find device "nvmf_init_br" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:13.302 Cannot find device "nvmf_init_br2" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:25:13.302 
11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:13.302 Cannot find device "nvmf_tgt_br" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:13.302 Cannot find device "nvmf_tgt_br2" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:13.302 Cannot find device "nvmf_br" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:13.302 Cannot find device "nvmf_init_if" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:13.302 Cannot find device "nvmf_init_if2" 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.302 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:13.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:13.562 00:25:13.562 --- 10.0.0.3 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:13.562 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:13.562 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:25:13.562 00:25:13.562 --- 10.0.0.4 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:13.562 00:25:13.562 --- 10.0.0.1 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:13.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:13.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:25:13.562 00:25:13.562 --- 10.0.0.2 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=105709 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 105709 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105709 ']' 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
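
At this point nvmftestinit is complete and nvmfappstart has launched the target inside the namespace (pid 105709), so the wait above is for the /var/tmp/spdk.sock RPC socket to come up. The four successful pings verify the topology nvmf_veth_init just built: two veth pairs on the initiator side (nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2), two moved into nvmf_tgt_ns_spdk for the target side (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4), all joined on the nvmf_br bridge, with the port-4420 iptables rules tagged SPDK_NVMF so teardown can strip them later. A minimal sketch of the same plumbing for one pair per side (names and addresses as in the log; the second pair and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                     # join both pairs on the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace can reach the initiator
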
00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.562 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:13.562 [2024-12-16 11:45:18.909694] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:13.562 [2024-12-16 11:45:18.911044] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:13.562 [2024-12-16 11:45:18.911128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.821 [2024-12-16 11:45:19.067930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.821 [2024-12-16 11:45:19.140588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.821 [2024-12-16 11:45:19.140661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.821 [2024-12-16 11:45:19.140675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.821 [2024-12-16 11:45:19.140686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.821 [2024-12-16 11:45:19.140695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.821 [2024-12-16 11:45:19.141217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.081 [2024-12-16 11:45:19.277557] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:14.081 [2024-12-16 11:45:19.277941] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
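
The two thread notices above confirm that both the app thread and nvmf_tgt_poll_group_000 run in interrupt mode, matching the --interrupt-mode flag nvmfappstart passed. With the target up, queue_depth.sh provisions it entirely over /var/tmp/spdk.sock: a TCP transport, a RAM-backed bdev, and a subsystem exposing that bdev on the namespaced listener. The rpc_cmd records that follow do exactly this; condensed into direct rpc.py calls (every flag and value is taken from those records):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # transport opts from common.sh; -u 8192 = in-capsule data size
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a = allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
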
00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 [2024-12-16 11:45:19.366210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 Malloc0 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 [2024-12-16 11:45:19.438308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=105751 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 105751 /var/tmp/bdevperf.sock 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105751 ']' 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.081 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.338 [2024-12-16 11:45:19.503895] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
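
The initiator side is bdevperf started with -z, so it idles on its own RPC socket until it is configured; the records that follow attach an NVMe-oF controller to the listener just created and then kick off the run via bdevperf.py. The equivalent direct invocation, with every argument as in the log (-q 1024 is the queue depth under test, -o 4096 the I/O size in bytes, -w verify the workload, -t 10 the runtime in seconds):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
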
00:25:14.338 [2024-12-16 11:45:19.504000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105751 ] 00:25:14.338 [2024-12-16 11:45:19.654611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.338 [2024-12-16 11:45:19.719992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:14.596 NVMe0n1 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.596 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.855 Running I/O for 10 seconds... 00:25:16.727 8192.00 IOPS, 32.00 MiB/s [2024-12-16T11:45:23.076Z] 8431.00 IOPS, 32.93 MiB/s [2024-12-16T11:45:24.478Z] 8584.00 IOPS, 33.53 MiB/s [2024-12-16T11:45:25.414Z] 8715.50 IOPS, 34.04 MiB/s [2024-12-16T11:45:26.350Z] 8821.00 IOPS, 34.46 MiB/s [2024-12-16T11:45:27.286Z] 8898.67 IOPS, 34.76 MiB/s [2024-12-16T11:45:28.223Z] 8917.29 IOPS, 34.83 MiB/s [2024-12-16T11:45:29.158Z] 8838.25 IOPS, 34.52 MiB/s [2024-12-16T11:45:30.096Z] 8871.33 IOPS, 34.65 MiB/s [2024-12-16T11:45:30.354Z] 8912.00 IOPS, 34.81 MiB/s 00:25:24.940 Latency(us) 00:25:24.940 [2024-12-16T11:45:30.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.940 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:25:24.940 Verification LBA range: start 0x0 length 0x4000 00:25:24.940 NVMe0n1 : 10.09 8936.49 34.91 0.00 0.00 114056.53 31218.97 86745.83 00:25:24.940 [2024-12-16T11:45:30.354Z] =================================================================================================================== 00:25:24.940 [2024-12-16T11:45:30.354Z] Total : 8936.49 34.91 0.00 0.00 114056.53 31218.97 86745.83 00:25:24.940 { 00:25:24.940 "results": [ 00:25:24.940 { 00:25:24.940 "job": "NVMe0n1", 00:25:24.940 "core_mask": "0x1", 00:25:24.940 "workload": "verify", 00:25:24.941 "status": "finished", 00:25:24.941 "verify_range": { 00:25:24.941 "start": 0, 00:25:24.941 "length": 16384 00:25:24.941 }, 00:25:24.941 "queue_depth": 1024, 00:25:24.941 "io_size": 4096, 00:25:24.941 "runtime": 10.086283, 00:25:24.941 "iops": 8936.493255245763, 00:25:24.941 "mibps": 34.90817677830376, 00:25:24.941 "io_failed": 0, 00:25:24.941 "io_timeout": 0, 00:25:24.941 "avg_latency_us": 114056.5331405472, 00:25:24.941 "min_latency_us": 31218.967272727274, 00:25:24.941 "max_latency_us": 86745.83272727273 00:25:24.941 } 00:25:24.941 ], 
00:25:24.941 "core_count": 1 00:25:24.941 } 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 105751 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105751 ']' 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105751 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105751 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105751' 00:25:24.941 killing process with pid 105751 00:25:24.941 Received shutdown signal, test time was about 10.000000 seconds 00:25:24.941 00:25:24.941 Latency(us) 00:25:24.941 [2024-12-16T11:45:30.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.941 [2024-12-16T11:45:30.355Z] =================================================================================================================== 00:25:24.941 [2024-12-16T11:45:30.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105751 00:25:24.941 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105751 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.199 rmmod nvme_tcp 00:25:25.199 rmmod nvme_fabrics 00:25:25.199 rmmod nvme_keyring 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:25:25.199 11:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 105709 ']' 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 105709 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105709 ']' 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105709 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105709 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.199 killing process with pid 105709 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105709' 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105709 00:25:25.199 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105709 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:25.458 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:25.717 11:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:25.717 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:25:25.717 00:25:25.717 real 0m12.828s 00:25:25.717 user 0m20.939s 00:25:25.717 sys 0m2.537s 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:25.717 ************************************ 00:25:25.717 END TEST nvmf_queue_depth 00:25:25.717 ************************************ 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.717 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:25.977 ************************************ 00:25:25.977 START TEST nvmf_target_multipath 00:25:25.977 ************************************ 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:25.977 * Looking for test storage... 
00:25:25.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.977 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:25.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.977 --rc genhtml_branch_coverage=1 00:25:25.977 --rc genhtml_function_coverage=1 00:25:25.977 --rc genhtml_legend=1 00:25:25.977 --rc geninfo_all_blocks=1 00:25:25.977 --rc geninfo_unexecuted_blocks=1 00:25:25.977 00:25:25.977 ' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.978 --rc genhtml_branch_coverage=1 00:25:25.978 --rc genhtml_function_coverage=1 00:25:25.978 --rc genhtml_legend=1 00:25:25.978 --rc geninfo_all_blocks=1 00:25:25.978 --rc geninfo_unexecuted_blocks=1 00:25:25.978 00:25:25.978 ' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.978 --rc genhtml_branch_coverage=1 00:25:25.978 --rc genhtml_function_coverage=1 00:25:25.978 --rc genhtml_legend=1 00:25:25.978 --rc geninfo_all_blocks=1 00:25:25.978 --rc geninfo_unexecuted_blocks=1 00:25:25.978 00:25:25.978 ' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.978 --rc genhtml_branch_coverage=1 00:25:25.978 --rc genhtml_function_coverage=1 00:25:25.978 --rc 
genhtml_legend=1 00:25:25.978 --rc geninfo_all_blocks=1 00:25:25.978 --rc geninfo_unexecuted_blocks=1 00:25:25.978 00:25:25.978 ' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.978 11:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.978 11:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:25.978 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:25.979 Cannot find device "nvmf_init_br" 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:25.979 Cannot find device "nvmf_init_br2" 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:25.979 Cannot find device "nvmf_tgt_br" 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.979 Cannot find device "nvmf_tgt_br2" 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:25:25.979 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:25:26.237 Cannot find device "nvmf_init_br" 00:25:26.237 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:26.238 Cannot find device "nvmf_init_br2" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:26.238 Cannot find device "nvmf_tgt_br" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:26.238 Cannot find device "nvmf_tgt_br2" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:26.238 Cannot find device "nvmf_br" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:26.238 Cannot find device "nvmf_init_if" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:26.238 Cannot find device "nvmf_init_if2" 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:26.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:26.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:26.238 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:26.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:26.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:25:26.497 00:25:26.497 --- 10.0.0.3 ping statistics --- 00:25:26.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.497 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:26.497 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:26.497 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:25:26.497 00:25:26.497 --- 10.0.0.4 ping statistics --- 00:25:26.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.497 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:26.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:26.497 00:25:26.497 --- 10.0.0.1 ping statistics --- 00:25:26.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.497 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:26.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:26.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:25:26.497 00:25:26.497 --- 10.0.0.2 ping statistics --- 00:25:26.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.497 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.497 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=106113 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 106113 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 106113 ']' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.498 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:26.498 [2024-12-16 11:45:31.880266] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:26.498 [2024-12-16 11:45:31.881819] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:26.498 [2024-12-16 11:45:31.881900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.756 [2024-12-16 11:45:32.028549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.756 [2024-12-16 11:45:32.090912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.756 [2024-12-16 11:45:32.090969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.756 [2024-12-16 11:45:32.090981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.756 [2024-12-16 11:45:32.090990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.756 [2024-12-16 11:45:32.090997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.756 [2024-12-16 11:45:32.092352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.756 [2024-12-16 11:45:32.092478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.756 [2024-12-16 11:45:32.092597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.756 [2024-12-16 11:45:32.092598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.015 [2024-12-16 11:45:32.191113] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:27.016 [2024-12-16 11:45:32.191689] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:27.016 [2024-12-16 11:45:32.191832] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:27.016 [2024-12-16 11:45:32.192074] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:27.016 [2024-12-16 11:45:32.193231] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:25:27.582 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.582 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:27.582 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.582 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.582 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:27.841 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.841 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:28.099 [2024-12-16 11:45:33.326140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.099 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:28.358 Malloc0 00:25:28.358 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:25:28.616 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.874 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:29.133 [2024-12-16 11:45:34.426228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:29.133 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:25:29.391 [2024-12-16 11:45:34.770163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:25:29.391 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:25:29.651 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:25:29.651 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:25:29.651 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.651 11:45:35 
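Everything the test provisioned above condenses to a handful of RPCs plus two host-side connects; because both connects use the same subsystem NQN and the same --hostnqn/--hostid, the kernel merges them into a single NVMe subsystem with two controllers (nvme0c0n1 and nvme0c1n1), which the check_ana_state records below then poll. A condensed sketch, with the hostnqn/hostid values elided:

    # Provisioning + dual-path connect, condensed from the trace (root required).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.4 -s 4420
    # Same NQN over two addresses => one multipathed subsystem on the host:
    nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G   # plus --hostnqn/--hostid as traced
    nvme connect -t tcp -n "$nqn" -a 10.0.0.4 -s 4420 -g -G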
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.651 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.651 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:25:32.227 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=106259 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:32.228 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:25:32.228 [global] 00:25:32.228 thread=1 00:25:32.228 invalidate=1 00:25:32.228 rw=randrw 00:25:32.228 time_based=1 00:25:32.228 runtime=6 00:25:32.228 ioengine=libaio 00:25:32.228 direct=1 00:25:32.228 bs=4096 00:25:32.228 iodepth=128 00:25:32.228 norandommap=0 00:25:32.228 numjobs=1 00:25:32.228 00:25:32.228 verify_dump=1 00:25:32.228 verify_backlog=512 00:25:32.228 verify_state_save=0 00:25:32.228 do_verify=1 00:25:32.228 verify=crc32c-intel 00:25:32.228 [job0] 00:25:32.228 filename=/dev/nvme0n1 00:25:32.228 Could not set queue depth (nvme0n1) 00:25:32.228 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:32.228 fio-3.35 00:25:32.228 Starting 1 thread 00:25:32.795 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:33.054 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:33.313 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:33.314 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:34.250 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:34.250 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:34.250 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:34.250 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:34.816 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:34.816 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:36.190 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:36.190 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:36.190 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:36.190 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 106259 00:25:38.127 00:25:38.127 job0: (groupid=0, jobs=1): err= 0: pid=106280: Mon Dec 16 11:45:43 2024 00:25:38.127 read: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(254MiB/6007msec) 00:25:38.127 slat (usec): min=2, max=7052, avg=53.82, stdev=265.68 00:25:38.127 clat (usec): min=1838, max=15539, avg=7942.18, stdev=1279.98 00:25:38.127 lat (usec): min=1849, max=15550, avg=7996.00, stdev=1293.93 00:25:38.127 clat percentiles (usec): 00:25:38.127 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7111], 00:25:38.127 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8094], 00:25:38.127 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10290], 00:25:38.127 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13829], 99.95th=[14222], 00:25:38.127 | 99.99th=[14746] 00:25:38.127 bw ( KiB/s): min= 9184, max=28848, per=52.68%, avg=22778.00, stdev=5430.38, samples=12 00:25:38.127 iops : min= 2296, max= 7212, avg=5694.50, stdev=1357.59, samples=12 00:25:38.127 write: IOPS=6307, BW=24.6MiB/s (25.8MB/s)(134MiB/5424msec); 0 zone resets 00:25:38.127 slat (usec): min=3, max=2639, avg=63.62, stdev=149.25 00:25:38.127 clat (usec): min=2140, max=14263, avg=7193.51, stdev=984.38 00:25:38.127 lat (usec): min=2171, max=14288, avg=7257.13, stdev=988.31 00:25:38.127 clat percentiles (usec): 00:25:38.127 | 1.00th=[ 4015], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6652], 00:25:38.127 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7373], 00:25:38.127 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8455], 00:25:38.127 | 99.00th=[10552], 99.50th=[11207], 99.90th=[12518], 99.95th=[13173], 00:25:38.127 | 99.99th=[13698] 00:25:38.127 bw ( KiB/s): min= 9584, max=28664, per=90.23%, avg=22764.67, stdev=5277.40, samples=12 00:25:38.127 iops : min= 2396, max= 7166, avg=5691.17, stdev=1319.35, samples=12 00:25:38.127 lat (msec) : 2=0.01%, 4=0.45%, 10=94.75%, 20=4.79% 00:25:38.127 cpu : usr=5.48%, sys=22.31%, ctx=7350, majf=0, minf=90 00:25:38.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:38.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:38.127 issued rwts: total=64935,34210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:38.127 00:25:38.127 Run status group 0 (all jobs): 00:25:38.127 READ: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=254MiB (266MB), run=6007-6007msec 00:25:38.127 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=134MiB (140MB), run=5424-5424msec 00:25:38.127 00:25:38.127 Disk stats (read/write): 00:25:38.128 nvme0n1: ios=64002/33559, merge=0/0, ticks=476600/230006, in_queue=706606, util=98.65% 00:25:38.128 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:38.386 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:25:38.644 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:25:38.644 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:25:38.645 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=106407 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:39.580 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:25:39.580 [global] 00:25:39.580 thread=1 00:25:39.580 invalidate=1 00:25:39.580 rw=randrw 00:25:39.580 time_based=1 00:25:39.580 runtime=6 00:25:39.580 ioengine=libaio 00:25:39.580 direct=1 00:25:39.580 bs=4096 00:25:39.580 iodepth=128 00:25:39.580 norandommap=0 00:25:39.580 numjobs=1 00:25:39.580 00:25:39.580 verify_dump=1 00:25:39.580 verify_backlog=512 00:25:39.580 verify_state_save=0 00:25:39.580 do_verify=1 00:25:39.580 verify=crc32c-intel 00:25:39.580 [job0] 00:25:39.580 filename=/dev/nvme0n1 00:25:39.838 Could not set queue depth (nvme0n1) 00:25:39.838 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:39.838 fio-3.35 00:25:39.838 Starting 1 thread 00:25:40.770 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:41.029 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:41.287 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:42.221 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:42.221 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:42.221 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:42.221 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:42.480 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:42.739 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:43.673 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:43.673 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:43.673 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:43.673 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 106407 00:25:46.200 00:25:46.200 job0: (groupid=0, jobs=1): err= 0: pid=106428: Mon Dec 16 11:45:51 2024 00:25:46.200 read: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(276MiB/6007msec) 00:25:46.200 slat (usec): min=4, max=6016, avg=42.69, stdev=226.12 00:25:46.200 clat (usec): min=374, max=56178, avg=7307.87, stdev=2221.66 00:25:46.200 lat (usec): min=398, max=56188, avg=7350.56, stdev=2235.27 00:25:46.200 clat percentiles (usec): 00:25:46.200 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5800], 00:25:46.200 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:25:46.200 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[10159], 00:25:46.200 | 99.00th=[11994], 99.50th=[13042], 99.90th=[47973], 99.95th=[52167], 00:25:46.200 | 99.99th=[55837] 00:25:46.200 bw ( KiB/s): min=11872, max=43912, per=53.35%, avg=25141.82, stdev=9153.76, samples=11 00:25:46.200 iops : min= 2968, max=10978, avg=6285.45, stdev=2288.44, samples=11 00:25:46.200 write: IOPS=6983, BW=27.3MiB/s (28.6MB/s)(147MiB/5399msec); 0 zone resets 00:25:46.200 slat (usec): min=15, max=3088, avg=54.17, stdev=126.20 00:25:46.200 clat (usec): min=643, max=55055, avg=6503.86, stdev=2377.95 00:25:46.200 lat (usec): min=671, max=55080, avg=6558.03, stdev=2387.53 00:25:46.200 clat percentiles (usec): 00:25:46.200 | 1.00th=[ 2868], 5.00th=[ 3687], 10.00th=[ 4113], 20.00th=[ 4752], 00:25:46.200 | 30.00th=[ 5735], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7111], 00:25:46.200 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8291], 00:25:46.200 | 99.00th=[10683], 99.50th=[12256], 99.90th=[49546], 99.95th=[53216], 00:25:46.200 | 99.99th=[53740] 00:25:46.200 bw ( KiB/s): min=12304, 
max=43104, per=89.95%, avg=25128.00, stdev=8893.03, samples=11 00:25:46.200 iops : min= 3076, max=10776, avg=6282.00, stdev=2223.26, samples=11 00:25:46.200 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:25:46.200 lat (msec) : 2=0.13%, 4=4.15%, 10=91.48%, 20=4.06%, 50=0.07% 00:25:46.200 lat (msec) : 100=0.08% 00:25:46.200 cpu : usr=5.63%, sys=24.83%, ctx=8886, majf=0, minf=90 00:25:46.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:46.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.200 issued rwts: total=70768,37704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.200 00:25:46.200 Run status group 0 (all jobs): 00:25:46.200 READ: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=276MiB (290MB), run=6007-6007msec 00:25:46.200 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=147MiB (154MB), run=5399-5399msec 00:25:46.200 00:25:46.200 Disk stats (read/write): 00:25:46.200 nvme0n1: ios=69823/37038, merge=0/0, ticks=472642/223198, in_queue=695840, util=98.65% 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:46.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:25:46.200 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:25:46.458 
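Both I/O phases of this test were driven by fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v, which generated the [job0] file shown earlier against /dev/nvme0n1. An approximately equivalent standalone invocation, inferred from that job file rather than taken from the wrapper itself:

    # Roughly the generated fio job as a one-liner (sketch; point it at a scratch device).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --rw=randrw --bs=4096 --iodepth=128 --numjobs=1 \
        --time_based --runtime=6 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512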
11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.458 rmmod nvme_tcp 00:25:46.458 rmmod nvme_fabrics 00:25:46.458 rmmod nvme_keyring 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 106113 ']' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 106113 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 106113 ']' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 106113 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106113 00:25:46.458 killing process with pid 106113 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106113' 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 106113 00:25:46.458 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 106113 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.716 11:45:52 
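The iptr records traced just above (and completed by the iptables-restore record that follows) undo the firewall changes without touching anything else: every rule the test added went in through ipts, which tags it with an SPDK_NVMF comment, so teardown can round-trip the whole ruleset through a filter and drop exactly the tagged lines. The two halves of that trick, as traced:

    # Setup: tag each rule with a comment naming the rule itself.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # Teardown: dump the ruleset, drop every tagged line, reload the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore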
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:46.716 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:25:46.974 00:25:46.974 real 0m21.162s 00:25:46.974 user 1m11.621s 00:25:46.974 sys 0m8.486s 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.974 ************************************ 00:25:46.974 END TEST nvmf_target_multipath 00:25:46.974 ************************************ 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:46.974 11:45:52 
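[editor note] The iptr step above works because every firewall rule the harness installs is tagged: ipts (seen later in this log when the zcopy test sets up its rules) appends an iptables comment starting with SPDK_NVMF, and teardown filters the saved ruleset by that tag. Both helpers reduce to one-liners, taken directly from the traced commands:

# add a rule, tagged so it can be found again later
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

# drop every tagged rule in one pass
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }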
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:46.974 ************************************ 00:25:46.974 START TEST nvmf_zcopy 00:25:46.974 ************************************ 00:25:46.974 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:47.234 * Looking for test storage... 00:25:47.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.234 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:47.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.234 --rc genhtml_branch_coverage=1 00:25:47.234 --rc genhtml_function_coverage=1 00:25:47.234 --rc genhtml_legend=1 00:25:47.234 --rc geninfo_all_blocks=1 00:25:47.235 --rc geninfo_unexecuted_blocks=1 00:25:47.235 00:25:47.235 ' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.235 --rc genhtml_branch_coverage=1 00:25:47.235 --rc genhtml_function_coverage=1 00:25:47.235 --rc genhtml_legend=1 00:25:47.235 --rc geninfo_all_blocks=1 00:25:47.235 --rc geninfo_unexecuted_blocks=1 00:25:47.235 00:25:47.235 ' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.235 --rc genhtml_branch_coverage=1 00:25:47.235 --rc genhtml_function_coverage=1 00:25:47.235 --rc genhtml_legend=1 00:25:47.235 --rc geninfo_all_blocks=1 00:25:47.235 --rc geninfo_unexecuted_blocks=1 00:25:47.235 00:25:47.235 ' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.235 --rc genhtml_branch_coverage=1 00:25:47.235 --rc genhtml_function_coverage=1 00:25:47.235 --rc genhtml_legend=1 00:25:47.235 --rc geninfo_all_blocks=1 00:25:47.235 --rc geninfo_unexecuted_blocks=1 00:25:47.235 00:25:47.235 ' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
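[editor note] The lt 1.15 2 call traced above is scripts/common.sh comparing the installed lcov version against 2, field by field: both versions are split on ".", "-" and ":", missing fields default to zero, and the first unequal field decides. A simplified sketch assuming purely numeric fields (the per-digit decimal validation from the trace is folded into the default):

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}       # missing components count as 0
        if ((d1 != d2)); then
            if ((d1 < d2)); then [[ $op == "<" ]]; else [[ $op == ">" ]]; fi
            return
        fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
}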
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.235 11:45:52 
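[editor note] Each time paths/export.sh is sourced it prepends the Go, protoc and golangci directories again, which is why the PATH above repeats them several times over. Harmless, but noisy; a standalone sketch, not part of the harness, that would collapse the duplicates while preserving order (assumes no colons or spaces inside individual entries):

dedupe_path() {
    local dir out=""
    while IFS= read -r -d: dir; do
        case ":$out:" in
            *":$dir:"*) ;;                  # already kept an earlier copy
            *) out+="${out:+:}$dir" ;;
        esac
    done <<< "$PATH:"                       # trailing ':' so the last entry is read too
    PATH=$out
}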
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:47.235 11:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:47.235 Cannot find device "nvmf_init_br" 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:47.235 Cannot find device "nvmf_init_br2" 00:25:47.235 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:47.236 Cannot find device "nvmf_tgt_br" 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.236 Cannot find device "nvmf_tgt_br2" 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:47.236 Cannot find device "nvmf_init_br" 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:47.236 Cannot find device "nvmf_init_br2" 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:25:47.236 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:47.494 Cannot find device "nvmf_tgt_br" 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:47.494 Cannot find device "nvmf_tgt_br2" 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:47.494 Cannot find device 
"nvmf_br" 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:47.494 Cannot find device "nvmf_init_if" 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:47.494 Cannot find device "nvmf_init_if2" 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:47.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:47.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:25:47.494 00:25:47.494 --- 10.0.0.3 ping statistics --- 00:25:47.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.494 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:47.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:47.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:25:47.494 00:25:47.494 --- 10.0.0.4 ping statistics --- 00:25:47.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.494 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:47.494 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:47.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:47.753 00:25:47.753 --- 10.0.0.1 ping statistics --- 00:25:47.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.753 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:47.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:25:47.753 00:25:47.753 --- 10.0.0.2 ping statistics --- 00:25:47.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.753 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=106786 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 106786 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 106786 ']' 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
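[editor note] Condensed from the trace above, the topology nvmf_veth_init builds is: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and all four bridge-side ends enslaved to one bridge, so the host addresses 10.0.0.1/.2 can reach the namespaced target addresses 10.0.0.3/.4, which the four pings then verify in both directions. Stripped of the error handling and "Cannot find device" pre-cleanup:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk          # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up
    ip link set "$br" master nvmf_br                     # stitch both sides together
done
ping -c 1 10.0.0.3                                       # host -> namespace, as checked above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # namespace -> host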
rpc_addr=/var/tmp/spdk.sock 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.753 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:47.753 [2024-12-16 11:45:53.011156] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:47.753 [2024-12-16 11:45:53.012532] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:25:47.753 [2024-12-16 11:45:53.012617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.753 [2024-12-16 11:45:53.162326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.012 [2024-12-16 11:45:53.221475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.012 [2024-12-16 11:45:53.221547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.012 [2024-12-16 11:45:53.221558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.012 [2024-12-16 11:45:53.221567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.012 [2024-12-16 11:45:53.221574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.012 [2024-12-16 11:45:53.222035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.012 [2024-12-16 11:45:53.357994] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:48.012 [2024-12-16 11:45:53.358353] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
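[editor note] nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock, retrying up to max_retries=100 per the trace. The exact readiness probe lives in autotest_common.sh; a simplified sketch that checks the pid is still alive and the RPC socket has appeared:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # the real helper also issues a probe RPC
        sleep 0.5
    done
    return 1
}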
00:25:48.012 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.012 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:25:48.012 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:48.012 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:48.012 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 [2024-12-16 11:45:53.446885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 [2024-12-16 11:45:53.475284] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:25:48.271 11:45:53 
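[editor note] The zcopy target bring-up above is five RPCs; the only departure from the other nvmf tests is --zcopy on the transport. Replayed as plain commands, flags exactly as traced (rpc.py talks to the UNIX socket on the host side even though the target process runs inside the namespace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB RAM disk, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0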
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 malloc0 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:48.271 { 00:25:48.271 "params": { 00:25:48.271 "name": "Nvme$subsystem", 00:25:48.271 "trtype": "$TEST_TRANSPORT", 00:25:48.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.271 "adrfam": "ipv4", 00:25:48.271 "trsvcid": "$NVMF_PORT", 00:25:48.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.271 "hdgst": ${hdgst:-false}, 00:25:48.271 "ddgst": ${ddgst:-false} 00:25:48.271 }, 00:25:48.271 "method": "bdev_nvme_attach_controller" 00:25:48.271 } 00:25:48.271 EOF 00:25:48.271 )") 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:48.271 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:48.271 "params": { 00:25:48.271 "name": "Nvme1", 00:25:48.271 "trtype": "tcp", 00:25:48.271 "traddr": "10.0.0.3", 00:25:48.271 "adrfam": "ipv4", 00:25:48.271 "trsvcid": "4420", 00:25:48.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.271 "hdgst": false, 00:25:48.271 "ddgst": false 00:25:48.271 }, 00:25:48.271 "method": "bdev_nvme_attach_controller" 00:25:48.271 }' 00:25:48.271 [2024-12-16 11:45:53.588559] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
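[editor note] bdevperf has no discovery of its own; it takes its controllers from a JSON config. gen_nvmf_target_json, expanded in the trace above, emits one bdev_nvme_attach_controller entry per subsystem, and the result is handed over via process substitution: the /dev/fd/62 on the command line is just the read end of <(...). A trimmed sketch; the surrounding "subsystems"/"bdev" wrapper that the real helper assembles with jq is elided:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do          # default: a single subsystem, cnode1
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    (IFS=,; printf '%s\n' "${config[*]}")   # entries joined by commas, as printed in the trace
}

# equivalent to the --json /dev/fd/62 invocation traced above:
bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192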
00:25:48.271 [2024-12-16 11:45:53.588670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106818 ] 00:25:48.530 [2024-12-16 11:45:53.734613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.530 [2024-12-16 11:45:53.797926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.788 Running I/O for 10 seconds... 00:25:50.687 5617.00 IOPS, 43.88 MiB/s [2024-12-16T11:45:57.035Z] 5616.00 IOPS, 43.88 MiB/s [2024-12-16T11:45:58.411Z] 5631.33 IOPS, 43.99 MiB/s [2024-12-16T11:45:59.346Z] 5623.50 IOPS, 43.93 MiB/s [2024-12-16T11:46:00.281Z] 5628.20 IOPS, 43.97 MiB/s [2024-12-16T11:46:01.218Z] 5627.67 IOPS, 43.97 MiB/s [2024-12-16T11:46:02.153Z] 5633.43 IOPS, 44.01 MiB/s [2024-12-16T11:46:03.089Z] 5630.62 IOPS, 43.99 MiB/s [2024-12-16T11:46:04.025Z] 5642.44 IOPS, 44.08 MiB/s [2024-12-16T11:46:04.025Z] 5639.60 IOPS, 44.06 MiB/s 00:25:58.611 Latency(us) 00:25:58.611 [2024-12-16T11:46:04.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.611 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:58.611 Verification LBA range: start 0x0 length 0x1000 00:25:58.611 Nvme1n1 : 10.02 5640.92 44.07 0.00 0.00 22619.68 2189.50 30742.34 00:25:58.611 [2024-12-16T11:46:04.025Z] =================================================================================================================== 00:25:58.611 [2024-12-16T11:46:04.025Z] Total : 5640.92 44.07 0.00 0.00 22619.68 2189.50 30742.34 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=106936 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.870 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.870 { 00:25:58.870 "params": { 00:25:58.870 "name": "Nvme$subsystem", 00:25:58.870 "trtype": "$TEST_TRANSPORT", 00:25:58.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.870 "adrfam": "ipv4", 00:25:58.870 "trsvcid": "$NVMF_PORT", 00:25:58.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.871 "hdgst": ${hdgst:-false}, 00:25:58.871 "ddgst": ${ddgst:-false} 00:25:58.871 }, 00:25:58.871 "method": "bdev_nvme_attach_controller" 00:25:58.871 } 00:25:58.871 EOF 00:25:58.871 )") 00:25:58.871 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:58.871 [2024-12-16 
11:46:04.210616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.210690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:58.871 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:58.871 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:58.871 "params": { 00:25:58.871 "name": "Nvme1", 00:25:58.871 "trtype": "tcp", 00:25:58.871 "traddr": "10.0.0.3", 00:25:58.871 "adrfam": "ipv4", 00:25:58.871 "trsvcid": "4420", 00:25:58.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.871 "hdgst": false, 00:25:58.871 "ddgst": false 00:25:58.871 }, 00:25:58.871 "method": "bdev_nvme_attach_controller" 00:25:58.871 }' 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.222606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.222634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.234569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.234596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.242562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.242588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.250591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.250616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 [2024-12-16 11:46:04.251912] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:25:58.871 [2024-12-16 11:46:04.252032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106936 ] 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.258587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.258611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.266559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.266584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:58.871 [2024-12-16 11:46:04.278610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:58.871 [2024-12-16 11:46:04.278637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:58.871 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.290564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.290588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.302587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.302611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.314586] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.314619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.326724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.326808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.338671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.338737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.350646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.350710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.362622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.362672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.374578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 11:46:04.374604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:59.131 2024/12/16 11:46:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:59.131 [2024-12-16 11:46:04.386576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:59.131 [2024-12-16 
00:25:59.131 [2024-12-16 11:46:04.396433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
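The single available core reported here follows from the -c 0x1 coremask in the EAL parameters above: the core count is the number of set bits in the mask. A one-line illustration:

    # popcount of the EAL coremask 0x1 -> 1, matching the notice above
    print(bin(0x1).count("1"))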
00:25:59.132 [2024-12-16 11:46:04.469997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:59.391 Running I/O for 5 seconds...
00:26:00.428 10727.00 IOPS, 83.80 MiB/s [2024-12-16T11:46:05.842Z]
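Dividing the two figures in this bdevperf progress line implies an I/O size of roughly 8 KiB; that size is an inference from the numbers, not something the log states:

    # 83.80 MiB/s across 10727 IOPS ~= 8191.5 bytes, i.e. about 8 KiB per I/O
    print(83.80 * 1024 * 1024 / 10727)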
00:26:00.688 [2024-12-16 11:46:06.052200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:26:00.688 [2024-12-16 11:46:06.052238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:26:00.688 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.688 [2024-12-16 11:46:06.066883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.688 [2024-12-16 11:46:06.066926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.688 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.688 [2024-12-16 11:46:06.077564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.688 [2024-12-16 11:46:06.077604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.688 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.688 [2024-12-16 11:46:06.093106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.688 [2024-12-16 11:46:06.093151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.688 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.108095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.108137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.125168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.125212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.140129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.140174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.156584] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.156628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.172954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.172999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.188255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.188302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.205266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.205316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.219910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.219953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.238469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.238522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.248531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 
11:46:06.248570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.264583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.264627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.281277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.281321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.291647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.291686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.307856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.307899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.326621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.326666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.337182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.337223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:00.947 [2024-12-16 11:46:06.351027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:00.947 [2024-12-16 11:46:06.351067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:00.947 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.371378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.371424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.389467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.389512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.399674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.399714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.415378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.415422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.434677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.434730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.444662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.206 [2024-12-16 11:46:06.444709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.206 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.206 [2024-12-16 11:46:06.459873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.459922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.478546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.478594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.488555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.488596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.504946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.504992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.515522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.515563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.531702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.531753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.550353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.550404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.571742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.571791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.587045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.587089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.605930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.605981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.207 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.207 [2024-12-16 11:46:06.616672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.207 [2024-12-16 11:46:06.616716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.632145] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.632191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.650377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.650423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.660411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.660446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 10939.50 IOPS, 85.46 MiB/s [2024-12-16T11:46:06.880Z] 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.676137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.676182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.694461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.694517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.704347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.704386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.721018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.721062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.735151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.735195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.755721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.755773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.772698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.772748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.788933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.788980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.805094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.805143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.819789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.466 [2024-12-16 11:46:06.819837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:26:01.466 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.466 [2024-12-16 11:46:06.838319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.467 [2024-12-16 11:46:06.838374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.467 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.467 [2024-12-16 11:46:06.860089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.467 [2024-12-16 11:46:06.860149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.467 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.467 [2024-12-16 11:46:06.876513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.467 [2024-12-16 11:46:06.876573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.893034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.893076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.907976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.908034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.927028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.927081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.944517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.944558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.960993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.961047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.976087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.976129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:06.992850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:06.992892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.008153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.008197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.024871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.024916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.040145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.040184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.056910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.056957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.069995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.070057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.080431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.080469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.095704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.095748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.726 [2024-12-16 11:46:07.112965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.113030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:01.726 [2024-12-16 11:46:07.126855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.726 [2024-12-16 11:46:07.126901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.726 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.147609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.147660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.163560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.163605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.182167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.182219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.203219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.203270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.223905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.223950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.238881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.238920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.258998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.259052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.278551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.278611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.985 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.985 [2024-12-16 11:46:07.288994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.985 [2024-12-16 11:46:07.289045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.986 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.986 [2024-12-16 11:46:07.303277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.986 [2024-12-16 11:46:07.303325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.986 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.986 [2024-12-16 11:46:07.323336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.986 [2024-12-16 11:46:07.323397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:01.986 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:01.986 [2024-12-16 11:46:07.343172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:01.986 [2024-12-16 11:46:07.343225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:26:01.986 2024/12/16 11:46:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:26:01.986 [2024-12-16 11:46:07.362826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:26:01.986 [2024-12-16 11:46:07.362882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical retry pattern (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", client-side JSON-RPC error Code=-32602 Msg=Invalid parameters) repeated for every attempt from 11:46:07.373 through 11:46:08.714; duplicate entries elided]
00:26:02.505 11010.67 IOPS, 86.02 MiB/s [2024-12-16T11:46:07.919Z]
00:26:03.284 11051.75 IOPS, 86.34 MiB/s [2024-12-16T11:46:08.698Z]
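The rejections above appear to be the expected outcome of a negative test: the suite repeatedly asks the target to attach bdev malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 under NSID 1, which is already taken, and the target refuses each attempt with JSON-RPC code -32602 while IO keeps running. A minimal sketch of reproducing the same rejection by hand with SPDK's scripts/rpc.py client follows; the RPC socket path and the malloc bdev geometry are assumptions (this CI run drives the calls through the Go JSON-RPC client, not rpc.py):
  # assumed default RPC socket; create the backing malloc bdev (64 MiB, 512 B blocks)
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b malloc0 64 512
  # create the subsystem and attach malloc0 as namespace 1
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # adding the same NSID again should fail, mirroring the log:
  # "Requested NSID 1 already in use" / Code=-32602 Msg=Invalid parameters
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0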
[retry pattern continues unchanged from 11:46:08.724 through 11:46:09.395; duplicate entries elided]
00:26:04.063 [2024-12-16 11:46:09.415303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:26:04.063 [2024-12-16 11:46:09.415345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:26:04.063 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.063 [2024-12-16 11:46:09.435570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.063 [2024-12-16 11:46:09.435613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.063 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.063 [2024-12-16 11:46:09.454823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.063 [2024-12-16 11:46:09.454862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.063 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.063 [2024-12-16 11:46:09.466377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.063 [2024-12-16 11:46:09.466445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.063 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.477933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.477970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.489523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.489576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.508072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.508135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.525081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.525135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.540776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.540829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.556846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.556893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.571746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.571787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.590278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.590332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.611853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.611893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.625223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.625277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.635561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.635599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.652375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.652439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 11070.40 IOPS, 86.49 MiB/s [2024-12-16T11:46:09.737Z] [2024-12-16 11:46:09.667093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.667131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 00:26:04.323 Latency(us) 00:26:04.323 [2024-12-16T11:46:09.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.323 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:26:04.323 Nvme1n1 : 5.01 11071.21 86.49 0.00 0.00 11545.35 2889.54 19303.33 00:26:04.323 [2024-12-16T11:46:09.737Z] =================================================================================================================== 00:26:04.323 [2024-12-16T11:46:09.737Z] Total : 11071.21 86.49 0.00 0.00 11545.35 2889.54 19303.33 00:26:04.323 [2024-12-16 11:46:09.678650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.678700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.690610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.690660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.702640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.702692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.714663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.714715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.323 [2024-12-16 11:46:09.726661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.323 [2024-12-16 11:46:09.726726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.323 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.582 [2024-12-16 11:46:09.738639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.738693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.750660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.750717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.762671] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.762729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.774661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.774721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.786648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.786715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.798653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.798711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.810679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.810754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.822618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.822668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.834575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 
11:46:09.834623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.846638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.846691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.858703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.858764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.870644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.870693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.882629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.882674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 [2024-12-16 11:46:09.894604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:04.583 [2024-12-16 11:46:09.894646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:04.583 2024/12/16 11:46:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:04.583 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (106936) - No such process 00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 106936 00:26:04.583 11:46:09 
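For readers reproducing this failure by hand, a minimal sketch (not part of this log) of the RPC sequence that yields the repeated "Requested NSID 1 already in use" rejections above, assuming a running SPDK target and the in-tree scripts/rpc.py client:

# Hypothetical reproduction; names mirror the log, flags as in rpc.py.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # allow any host
scripts/rpc.py bdev_malloc_create -b malloc0 64 512                            # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds, claims NSID 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Code=-32602 Msg=Invalid parameters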
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:04.583 delay0
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.583 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:26:04.842 [2024-12-16 11:46:10.095530] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:26:12.957 Initializing NVMe Controllers
00:26:12.957 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:26:12.957 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:12.957 Initialization complete. Launching workers.
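The rpc_cmd calls traced above wrap the same rpc.py client; as a standalone sketch (paths assume the repo layout shown in this log), the delay bdev and the abort example look like:

# Sketch only: wrap malloc0 in a delay bdev (latencies in microseconds), expose
# it as NSID 1, then drive it with the in-tree abort example over TCP.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The one-second delays keep queue depth 64 of I/O in flight long enough for aborts to land, which is why most of the 18019 submitted aborts succeed in the results that follow.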
00:26:12.957 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 258, failed: 17855
00:26:12.957 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18019, failed to submit 94
00:26:12.957 success 17939, unsuccessful 80, failed 0
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:12.957 rmmod nvme_tcp
00:26:12.957 rmmod nvme_fabrics
00:26:12.957 rmmod nvme_keyring
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 106786 ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 106786 ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:12.957 killing process with pid 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106786'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 106786
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
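nvmfcleanup's module-unload half, condensed into a hedged sketch (the retry loop, set +e/-e bracket, and modprobe flags are from the nvmf/common.sh trace here; the break-on-success and sleep between retries are assumptions, and $nvmfpid is a hypothetical stand-in for pid 106786):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # prints the rmmod lines seen above on success
    sleep 1                            # assumption: back off before retrying
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid" && wait "$nvmfpid"     # hypothetical variable; the harness tracks the target pid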
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:26:12.957
00:26:12.957 real 0m25.385s
00:26:12.957 user 0m38.856s
00:26:12.957 sys 0m8.563s
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@10 -- # set +x 00:26:12.957 ************************************ 00:26:12.957 END TEST nvmf_zcopy 00:26:12.957 ************************************ 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:12.957 ************************************ 00:26:12.957 START TEST nvmf_nmic 00:26:12.957 ************************************ 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:12.957 * Looking for test storage... 00:26:12.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:12.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.958 --rc genhtml_branch_coverage=1 00:26:12.958 --rc genhtml_function_coverage=1 00:26:12.958 --rc genhtml_legend=1 00:26:12.958 --rc geninfo_all_blocks=1 00:26:12.958 --rc geninfo_unexecuted_blocks=1 00:26:12.958 00:26:12.958 ' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.958 --rc genhtml_branch_coverage=1 00:26:12.958 --rc genhtml_function_coverage=1 00:26:12.958 --rc genhtml_legend=1 00:26:12.958 --rc geninfo_all_blocks=1 00:26:12.958 --rc geninfo_unexecuted_blocks=1 00:26:12.958 00:26:12.958 ' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.958 --rc genhtml_branch_coverage=1 00:26:12.958 --rc genhtml_function_coverage=1 00:26:12.958 --rc genhtml_legend=1 00:26:12.958 --rc geninfo_all_blocks=1 00:26:12.958 --rc geninfo_unexecuted_blocks=1 00:26:12.958 00:26:12.958 ' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.958 --rc genhtml_branch_coverage=1 00:26:12.958 --rc genhtml_function_coverage=1 00:26:12.958 --rc genhtml_legend=1 00:26:12.958 --rc geninfo_all_blocks=1 00:26:12.958 --rc geninfo_unexecuted_blocks=1 00:26:12.958 00:26:12.958 ' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.958 11:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:26:12.958 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.958 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.958 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:12.959 Cannot find device "nvmf_init_br" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:12.959 Cannot find device "nvmf_init_br2" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:12.959 Cannot find device "nvmf_tgt_br" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.959 Cannot find device "nvmf_tgt_br2" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:12.959 Cannot find device "nvmf_init_br" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:12.959 Cannot find device "nvmf_init_br2" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:12.959 Cannot find device "nvmf_tgt_br" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:12.959 Cannot find device "nvmf_tgt_br2" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
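Each failed teardown command above is immediately followed by a traced "# true": nvmf_veth_init pre-cleans with a best-effort idiom so a missing device is not fatal, then builds the topology traced below. A condensed sketch under the interface and address names from this trace (only a subset of the pairs is shown):

# Best-effort pre-clean: a missing device must not abort the test.
ip link set nvmf_init_br nomaster || true      # "Cannot find device" is expected here
# Topology build: host-side initiator veth pair, target veth pair moved into
# the nvmf_tgt_ns_spdk namespace, 10.0.0.x/24 addresses as assigned below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if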
00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:12.959 Cannot find device "nvmf_br" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:12.959 Cannot find device "nvmf_init_if" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:12.959 Cannot find device "nvmf_init_if2" 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:12.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:12.959 11:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:12.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:12.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:26:12.959 00:26:12.959 --- 10.0.0.3 ping statistics --- 00:26:12.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.959 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:12.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:12.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:26:12.959 00:26:12.959 --- 10.0.0.4 ping statistics --- 00:26:12.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.959 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:12.959 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:13.219 00:26:13.219 --- 10.0.0.1 ping statistics --- 00:26:13.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.219 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:13.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:13.219 00:26:13.219 --- 10.0.0.2 ping statistics --- 00:26:13.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.219 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=107315 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 107315 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 107315 ']' 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.219 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:13.219 [2024-12-16 11:46:18.466762] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:13.219 [2024-12-16 11:46:18.468182] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:26:13.219 [2024-12-16 11:46:18.468287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.219 [2024-12-16 11:46:18.616159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.478 [2024-12-16 11:46:18.692646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.478 [2024-12-16 11:46:18.692730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.478 [2024-12-16 11:46:18.692749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.478 [2024-12-16 11:46:18.692764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.478 [2024-12-16 11:46:18.692777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.478 [2024-12-16 11:46:18.695434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.478 [2024-12-16 11:46:18.695584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.478 [2024-12-16 11:46:18.696822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.478 [2024-12-16 11:46:18.696844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.478 [2024-12-16 11:46:18.804679] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:13.478 [2024-12-16 11:46:18.804921] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:13.478 [2024-12-16 11:46:18.805610] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:26:13.478 [2024-12-16 11:46:18.805931] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:13.478 [2024-12-16 11:46:18.807306] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 [2024-12-16 11:46:19.514249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 Malloc0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 [2024-12-16 11:46:19.590265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 test case1: single bdev can't be used in multiple subsystems 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 [2024-12-16 11:46:19.617926] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:26:14.417 [2024-12-16 11:46:19.617991] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:26:14.417 [2024-12-16 11:46:19.618026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.417 2024/12/16 11:46:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.417 request: 00:26:14.417 { 00:26:14.417 "method": "nvmf_subsystem_add_ns", 00:26:14.417 "params": { 00:26:14.417 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:26:14.417 "namespace": { 00:26:14.417 "bdev_name": "Malloc0", 00:26:14.417 "no_auto_visible": false, 00:26:14.417 "hide_metadata": false 00:26:14.417 } 00:26:14.417 } 00:26:14.417 } 00:26:14.417 Got JSON-RPC error response 00:26:14.417 GoRPCClient: error on JSON-RPC call 
00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:26:14.417 Adding namespace failed - expected result. 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:26:14.417 test case2: host connect to nvmf target in multiple paths 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:14.417 [2024-12-16 11:46:19.630193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:14.417 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.957 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:26:16.957 
11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:16.957 [global] 00:26:16.957 thread=1 00:26:16.957 invalidate=1 00:26:16.957 rw=write 00:26:16.957 time_based=1 00:26:16.957 runtime=1 00:26:16.957 ioengine=libaio 00:26:16.957 direct=1 00:26:16.957 bs=4096 00:26:16.957 iodepth=1 00:26:16.957 norandommap=0 00:26:16.957 numjobs=1 00:26:16.957 00:26:16.957 verify_dump=1 00:26:16.957 verify_backlog=512 00:26:16.957 verify_state_save=0 00:26:16.957 do_verify=1 00:26:16.957 verify=crc32c-intel 00:26:16.957 [job0] 00:26:16.957 filename=/dev/nvme0n1 00:26:16.957 Could not set queue depth (nvme0n1) 00:26:16.957 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:16.957 fio-3.35 00:26:16.957 Starting 1 thread 00:26:17.894 00:26:17.894 job0: (groupid=0, jobs=1): err= 0: pid=107419: Mon Dec 16 11:46:23 2024 00:26:17.894 read: IOPS=2591, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1000msec) 00:26:17.894 slat (nsec): min=13115, max=60033, avg=18002.88, stdev=6346.49 00:26:17.894 clat (usec): min=154, max=2866, avg=185.14, stdev=61.93 00:26:17.894 lat (usec): min=169, max=2888, avg=203.14, stdev=64.35 00:26:17.894 clat percentiles (usec): 00:26:17.894 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 165], 20.00th=[ 169], 00:26:17.894 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:26:17.894 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 237], 95.00th=[ 245], 00:26:17.894 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 482], 99.95th=[ 1074], 00:26:17.894 | 99.99th=[ 2868] 00:26:17.894 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:26:17.894 slat (nsec): min=18408, max=96484, avg=23282.34, stdev=5072.48 00:26:17.894 clat (usec): min=106, max=4910, avg=127.26, stdev=130.15 00:26:17.894 lat (usec): min=125, max=4945, avg=150.55, stdev=130.57 00:26:17.894 clat percentiles (usec): 00:26:17.894 | 1.00th=[ 112], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 116], 00:26:17.894 | 30.00th=[ 118], 40.00th=[ 119], 50.00th=[ 120], 60.00th=[ 122], 00:26:17.894 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:26:17.894 | 99.00th=[ 151], 99.50th=[ 186], 99.90th=[ 2343], 99.95th=[ 3359], 00:26:17.894 | 99.99th=[ 4883] 00:26:17.894 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:26:17.894 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:26:17.894 lat (usec) : 250=98.02%, 500=1.80%, 1000=0.04% 00:26:17.894 lat (msec) : 2=0.05%, 4=0.07%, 10=0.02% 00:26:17.894 cpu : usr=2.10%, sys=9.00%, ctx=5670, majf=0, minf=5 00:26:17.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.894 issued rwts: total=2591,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:17.894 00:26:17.894 Run status group 0 (all jobs): 00:26:17.894 READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=10.1MiB (10.6MB), run=1000-1000msec 00:26:17.894 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:26:17.894 00:26:17.894 Disk stats (read/write): 00:26:17.894 nvme0n1: ios=2502/2560, merge=0/0, ticks=477/342, in_queue=819, 
util=90.48% 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:17.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.894 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.894 rmmod nvme_tcp 00:26:17.894 rmmod nvme_fabrics 00:26:17.894 rmmod nvme_keyring 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 107315 ']' 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 107315 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 107315 ']' 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 107315 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107315 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107315' 00:26:18.154 killing process with pid 107315 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 107315 00:26:18.154 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 107315 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:18.413 
11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.413 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:26:18.673 ************************************ 00:26:18.673 END TEST nvmf_nmic 00:26:18.673 ************************************ 00:26:18.673 00:26:18.673 real 0m6.058s 00:26:18.673 user 0m14.941s 00:26:18.673 sys 0m2.170s 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:18.673 ************************************ 00:26:18.673 START TEST nvmf_fio_target 00:26:18.673 ************************************ 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:18.673 * Looking for test storage... 
00:26:18.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:18.673 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:18.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.673 --rc genhtml_branch_coverage=1 00:26:18.673 --rc genhtml_function_coverage=1 00:26:18.673 --rc genhtml_legend=1 00:26:18.673 --rc geninfo_all_blocks=1 00:26:18.673 --rc geninfo_unexecuted_blocks=1 00:26:18.673 00:26:18.673 ' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:18.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.673 --rc genhtml_branch_coverage=1 00:26:18.673 --rc genhtml_function_coverage=1 00:26:18.673 --rc genhtml_legend=1 00:26:18.673 --rc geninfo_all_blocks=1 00:26:18.673 --rc geninfo_unexecuted_blocks=1 00:26:18.673 00:26:18.673 ' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:18.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.673 --rc genhtml_branch_coverage=1 00:26:18.673 --rc genhtml_function_coverage=1 00:26:18.673 --rc genhtml_legend=1 00:26:18.673 --rc geninfo_all_blocks=1 00:26:18.673 --rc geninfo_unexecuted_blocks=1 00:26:18.673 00:26:18.673 ' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:18.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.673 --rc genhtml_branch_coverage=1 00:26:18.673 --rc genhtml_function_coverage=1 00:26:18.673 --rc genhtml_legend=1 00:26:18.673 --rc geninfo_all_blocks=1 00:26:18.673 --rc geninfo_unexecuted_blocks=1 00:26:18.673 
00:26:18.673 ' 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.673 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.933 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:18.934 11:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:18.934 Cannot find device "nvmf_init_br" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:18.934 Cannot find device "nvmf_init_br2" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:18.934 Cannot find device "nvmf_tgt_br" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:18.934 Cannot find device "nvmf_tgt_br2" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:18.934 Cannot find device "nvmf_init_br" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:18.934 Cannot find device "nvmf_init_br2" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:18.934 Cannot find device "nvmf_tgt_br" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:18.934 Cannot find device "nvmf_tgt_br2" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:18.934 Cannot find device "nvmf_br" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:18.934 Cannot find device "nvmf_init_if" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:18.934 Cannot find device "nvmf_init_if2" 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:18.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:18.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:18.934 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:18.935 11:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:18.935 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:18.935 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:18.935 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:19.194 11:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:19.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:19.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:26:19.194 00:26:19.194 --- 10.0.0.3 ping statistics --- 00:26:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.194 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:19.194 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:19.194 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:26:19.194 00:26:19.194 --- 10.0.0.4 ping statistics --- 00:26:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.194 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:19.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:19.194 00:26:19.194 --- 10.0.0.1 ping statistics --- 00:26:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.194 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:19.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:26:19.194 00:26:19.194 --- 10.0.0.2 ping statistics --- 00:26:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.194 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:19.194 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=107657 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 107657 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 107657 ']' 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.195 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.195 [2024-12-16 11:46:24.570691] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:26:19.195 [2024-12-16 11:46:24.572107] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:26:19.195 [2024-12-16 11:46:24.572201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.454 [2024-12-16 11:46:24.728259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.454 [2024-12-16 11:46:24.801281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.454 [2024-12-16 11:46:24.801666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.454 [2024-12-16 11:46:24.801911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.454 [2024-12-16 11:46:24.802144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.454 [2024-12-16 11:46:24.802268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.454 [2024-12-16 11:46:24.803701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.454 [2024-12-16 11:46:24.803781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.454 [2024-12-16 11:46:24.803983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.454 [2024-12-16 11:46:24.803872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.712 [2024-12-16 11:46:24.907767] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:19.712 [2024-12-16 11:46:24.908182] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:19.712 [2024-12-16 11:46:24.908965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:19.712 [2024-12-16 11:46:24.909282] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:19.712 [2024-12-16 11:46:24.910529] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
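For reference, the nvmf/common.sh trace above amounts to the following hand-written sketch of the test network: two initiator veth pairs stay on the host (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target pairs move into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and all four bridge-side peers join nvmf_br. Names, addresses, and rules are taken from the trace; link bring-up, the four ping checks, and the comment tagging that the ipts wrapper appends to each iptables rule are elided here.

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry IPs, the *_br ends join the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # target-side interfaces move into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator 10.0.0.1/.2 on the host, target 10.0.0.3/.4 in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # one bridge ties the four host-side peers together
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # admit NVMe/TCP traffic on port 4420 and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... --interrupt-mode -m 0xF, as traced above), so the target listens on 10.0.0.3/10.0.0.4 while the initiator connects from the host side of the bridge.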
00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.278 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:20.537 [2024-12-16 11:46:25.880937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.537 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:21.145 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:26:21.145 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:21.404 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:26:21.404 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:21.663 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:26:21.663 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:21.921 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:26:21.921 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:26:22.180 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:22.438 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:26:22.438 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:22.696 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:26:22.696 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:22.955 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:26:22.955 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:26:23.214 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:23.475 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:23.475 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.733 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:23.733 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:23.991 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:24.250 [2024-12-16 11:46:29.624917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:24.250 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:26:24.508 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:26:25.076 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:26:27.015 11:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:26:27.015 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:27.015 [global] 00:26:27.015 thread=1 00:26:27.015 invalidate=1 00:26:27.015 rw=write 00:26:27.015 time_based=1 00:26:27.015 runtime=1 00:26:27.015 ioengine=libaio 00:26:27.015 direct=1 00:26:27.015 bs=4096 00:26:27.015 iodepth=1 00:26:27.015 norandommap=0 00:26:27.015 numjobs=1 00:26:27.015 00:26:27.015 verify_dump=1 00:26:27.015 verify_backlog=512 00:26:27.015 verify_state_save=0 00:26:27.015 do_verify=1 00:26:27.015 verify=crc32c-intel 00:26:27.015 [job0] 00:26:27.015 filename=/dev/nvme0n1 00:26:27.015 [job1] 00:26:27.015 filename=/dev/nvme0n2 00:26:27.015 [job2] 00:26:27.015 filename=/dev/nvme0n3 00:26:27.015 [job3] 00:26:27.015 filename=/dev/nvme0n4 00:26:27.015 Could not set queue depth (nvme0n1) 00:26:27.015 Could not set queue depth (nvme0n2) 00:26:27.015 Could not set queue depth (nvme0n3) 00:26:27.015 Could not set queue depth (nvme0n4) 00:26:27.274 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:27.274 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:27.274 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:27.274 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:27.274 fio-3.35 00:26:27.274 Starting 4 threads 00:26:28.650 00:26:28.650 job0: (groupid=0, jobs=1): err= 0: pid=107942: Mon Dec 16 11:46:33 2024 00:26:28.650 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:26:28.650 slat (nsec): min=12642, max=37088, avg=14153.07, stdev=2243.54 00:26:28.650 clat (usec): min=164, max=338, avg=189.06, stdev=12.20 00:26:28.650 lat (usec): min=177, max=352, avg=203.22, stdev=12.57 00:26:28.650 clat percentiles (usec): 00:26:28.650 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 180], 00:26:28.650 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:26:28.650 | 70.00th=[ 194], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 206], 00:26:28.650 | 99.00th=[ 229], 99.50th=[ 262], 99.90th=[ 285], 99.95th=[ 289], 00:26:28.650 | 99.99th=[ 338] 00:26:28.650 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:26:28.650 slat (nsec): min=17886, max=72460, avg=20376.72, stdev=3137.94 00:26:28.650 clat (usec): min=114, max=430, avg=137.83, stdev=11.09 00:26:28.650 lat (usec): min=133, max=450, avg=158.21, stdev=11.68 00:26:28.650 clat percentiles (usec): 00:26:28.650 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:26:28.650 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:26:28.650 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:26:28.650 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 231], 99.95th=[ 265], 00:26:28.650 | 99.99th=[ 433] 00:26:28.650 bw ( KiB/s): min=12288, max=12288, per=31.92%, avg=12288.00, stdev= 0.00, samples=1 00:26:28.650 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:26:28.650 lat (usec) : 250=99.60%, 500=0.40% 00:26:28.650 cpu : usr=1.60%, sys=7.30%, 
ctx=5539, majf=0, minf=7 00:26:28.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.650 issued rwts: total=2560,2979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.650 job1: (groupid=0, jobs=1): err= 0: pid=107943: Mon Dec 16 11:46:33 2024 00:26:28.650 read: IOPS=1697, BW=6789KiB/s (6952kB/s)(6796KiB/1001msec) 00:26:28.650 slat (usec): min=11, max=110, avg=15.20, stdev= 3.40 00:26:28.650 clat (usec): min=161, max=659, avg=288.11, stdev=65.59 00:26:28.650 lat (usec): min=177, max=674, avg=303.31, stdev=65.01 00:26:28.650 clat percentiles (usec): 00:26:28.650 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 196], 00:26:28.650 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:26:28.650 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 355], 00:26:28.650 | 99.00th=[ 437], 99.50th=[ 441], 99.90th=[ 578], 99.95th=[ 660], 00:26:28.650 | 99.99th=[ 660] 00:26:28.650 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:26:28.650 slat (nsec): min=16419, max=92716, avg=22302.37, stdev=4801.27 00:26:28.650 clat (usec): min=121, max=458, avg=211.11, stdev=45.13 00:26:28.650 lat (usec): min=146, max=478, avg=233.41, stdev=43.12 00:26:28.650 clat percentiles (usec): 00:26:28.650 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 149], 00:26:28.650 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 233], 00:26:28.650 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:26:28.650 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 326], 00:26:28.650 | 99.99th=[ 457] 00:26:28.650 bw ( KiB/s): min= 8192, max= 8192, per=21.28%, avg=8192.00, stdev= 0.00, samples=1 00:26:28.650 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:28.650 lat (usec) : 250=58.45%, 500=41.47%, 750=0.08% 00:26:28.650 cpu : usr=1.70%, sys=5.10%, ctx=3749, majf=0, minf=9 00:26:28.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.650 issued rwts: total=1699,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.650 job2: (groupid=0, jobs=1): err= 0: pid=107944: Mon Dec 16 11:46:33 2024 00:26:28.650 read: IOPS=2187, BW=8751KiB/s (8961kB/s)(8760KiB/1001msec) 00:26:28.650 slat (nsec): min=13511, max=49846, avg=16821.22, stdev=4597.80 00:26:28.650 clat (usec): min=173, max=626, avg=209.01, stdev=45.95 00:26:28.650 lat (usec): min=188, max=657, avg=225.83, stdev=48.56 00:26:28.650 clat percentiles (usec): 00:26:28.650 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:26:28.650 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:26:28.651 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 322], 00:26:28.651 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 619], 99.95th=[ 619], 00:26:28.651 | 99.99th=[ 627] 00:26:28.651 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:26:28.651 slat (nsec): min=19408, max=83306, avg=25724.27, stdev=8595.23 00:26:28.651 clat (usec): min=123, max=1585, avg=168.20, stdev=65.24 00:26:28.651 
lat (usec): min=144, max=1606, avg=193.93, stdev=70.97 00:26:28.651 clat percentiles (usec): 00:26:28.651 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:26:28.651 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:26:28.651 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 260], 95.00th=[ 322], 00:26:28.651 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 594], 99.95th=[ 1090], 00:26:28.651 | 99.99th=[ 1582] 00:26:28.651 bw ( KiB/s): min=12288, max=12288, per=31.92%, avg=12288.00, stdev= 0.00, samples=1 00:26:28.651 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:26:28.651 lat (usec) : 250=91.68%, 500=8.17%, 750=0.11% 00:26:28.651 lat (msec) : 2=0.04% 00:26:28.651 cpu : usr=1.90%, sys=7.70%, ctx=4750, majf=0, minf=13 00:26:28.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.651 issued rwts: total=2190,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.651 job3: (groupid=0, jobs=1): err= 0: pid=107945: Mon Dec 16 11:46:33 2024 00:26:28.651 read: IOPS=1640, BW=6561KiB/s (6719kB/s)(6568KiB/1001msec) 00:26:28.651 slat (nsec): min=11352, max=42318, avg=14937.80, stdev=2962.40 00:26:28.651 clat (usec): min=177, max=513, avg=295.74, stdev=53.45 00:26:28.651 lat (usec): min=193, max=527, avg=310.68, stdev=53.20 00:26:28.651 clat percentiles (usec): 00:26:28.651 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 227], 00:26:28.651 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:26:28.651 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 359], 00:26:28.651 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 449], 99.95th=[ 515], 00:26:28.651 | 99.99th=[ 515] 00:26:28.651 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:26:28.651 slat (nsec): min=11816, max=89729, avg=21164.39, stdev=4279.62 00:26:28.651 clat (usec): min=129, max=627, avg=214.89, stdev=39.86 00:26:28.651 lat (usec): min=156, max=647, avg=236.06, stdev=39.24 00:26:28.651 clat percentiles (usec): 00:26:28.651 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:26:28.651 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:26:28.651 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:26:28.651 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 326], 00:26:28.651 | 99.99th=[ 627] 00:26:28.651 bw ( KiB/s): min= 8192, max= 8192, per=21.28%, avg=8192.00, stdev= 0.00, samples=1 00:26:28.651 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:28.651 lat (usec) : 250=57.59%, 500=42.36%, 750=0.05% 00:26:28.651 cpu : usr=1.00%, sys=5.60%, ctx=3694, majf=0, minf=7 00:26:28.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.651 issued rwts: total=1642,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.651 00:26:28.651 Run status group 0 (all jobs): 00:26:28.651 READ: bw=31.6MiB/s (33.1MB/s), 6561KiB/s-9.99MiB/s (6719kB/s-10.5MB/s), io=31.6MiB (33.1MB), run=1001-1001msec 00:26:28.651 WRITE: bw=37.6MiB/s (39.4MB/s), 
8184KiB/s-11.6MiB/s (8380kB/s-12.2MB/s), io=37.6MiB (39.5MB), run=1001-1001msec 00:26:28.651 00:26:28.651 Disk stats (read/write): 00:26:28.651 nvme0n1: ios=2243/2560, merge=0/0, ticks=493/376, in_queue=869, util=92.18% 00:26:28.651 nvme0n2: ios=1578/1706, merge=0/0, ticks=504/369, in_queue=873, util=92.20% 00:26:28.651 nvme0n3: ios=1933/2048, merge=0/0, ticks=419/375, in_queue=794, util=88.92% 00:26:28.651 nvme0n4: ios=1536/1630, merge=0/0, ticks=451/354, in_queue=805, util=89.67% 00:26:28.651 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:26:28.651 [global] 00:26:28.651 thread=1 00:26:28.651 invalidate=1 00:26:28.651 rw=randwrite 00:26:28.651 time_based=1 00:26:28.651 runtime=1 00:26:28.651 ioengine=libaio 00:26:28.651 direct=1 00:26:28.651 bs=4096 00:26:28.651 iodepth=1 00:26:28.651 norandommap=0 00:26:28.651 numjobs=1 00:26:28.651 00:26:28.651 verify_dump=1 00:26:28.651 verify_backlog=512 00:26:28.651 verify_state_save=0 00:26:28.651 do_verify=1 00:26:28.651 verify=crc32c-intel 00:26:28.651 [job0] 00:26:28.651 filename=/dev/nvme0n1 00:26:28.651 [job1] 00:26:28.651 filename=/dev/nvme0n2 00:26:28.651 [job2] 00:26:28.651 filename=/dev/nvme0n3 00:26:28.651 [job3] 00:26:28.651 filename=/dev/nvme0n4 00:26:28.651 Could not set queue depth (nvme0n1) 00:26:28.651 Could not set queue depth (nvme0n2) 00:26:28.651 Could not set queue depth (nvme0n3) 00:26:28.651 Could not set queue depth (nvme0n4) 00:26:28.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:28.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:28.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:28.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:28.651 fio-3.35 00:26:28.651 Starting 4 threads 00:26:30.026 00:26:30.026 job0: (groupid=0, jobs=1): err= 0: pid=108004: Mon Dec 16 11:46:35 2024 00:26:30.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:26:30.026 slat (nsec): min=8649, max=34771, avg=12917.59, stdev=2805.73 00:26:30.026 clat (usec): min=168, max=432, avg=254.84, stdev=23.76 00:26:30.026 lat (usec): min=181, max=446, avg=267.76, stdev=23.68 00:26:30.026 clat percentiles (usec): 00:26:30.026 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 239], 20.00th=[ 245], 00:26:30.026 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:26:30.026 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:26:30.026 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 363], 99.95th=[ 383], 00:26:30.026 | 99.99th=[ 433] 00:26:30.026 write: IOPS=2111, BW=8448KiB/s (8650kB/s)(8456KiB/1001msec); 0 zone resets 00:26:30.026 slat (usec): min=14, max=133, avg=20.95, stdev= 4.76 00:26:30.026 clat (usec): min=108, max=481, avg=189.46, stdev=33.32 00:26:30.026 lat (usec): min=137, max=515, avg=210.41, stdev=32.68 00:26:30.026 clat percentiles (usec): 00:26:30.026 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 143], 00:26:30.026 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:26:30.026 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:26:30.026 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 347], 99.95th=[ 478], 00:26:30.026 | 99.99th=[ 482] 00:26:30.026 bw ( KiB/s): min= 
9280, max= 9280, per=28.70%, avg=9280.00, stdev= 0.00, samples=1 00:26:30.026 iops : min= 2320, max= 2320, avg=2320.00, stdev= 0.00, samples=1 00:26:30.026 lat (usec) : 250=67.18%, 500=32.82% 00:26:30.026 cpu : usr=1.00%, sys=6.00%, ctx=4165, majf=0, minf=7 00:26:30.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.026 issued rwts: total=2048,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:30.026 job1: (groupid=0, jobs=1): err= 0: pid=108005: Mon Dec 16 11:46:35 2024 00:26:30.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:26:30.027 slat (nsec): min=8596, max=32983, avg=11457.75, stdev=3254.06 00:26:30.027 clat (usec): min=170, max=557, avg=256.75, stdev=25.19 00:26:30.027 lat (usec): min=186, max=577, avg=268.21, stdev=24.60 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 243], 20.00th=[ 247], 00:26:30.027 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:26:30.027 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:26:30.027 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 371], 99.95th=[ 453], 00:26:30.027 | 99.99th=[ 562] 00:26:30.027 write: IOPS=2106, BW=8428KiB/s (8630kB/s)(8436KiB/1001msec); 0 zone resets 00:26:30.027 slat (usec): min=11, max=167, avg=21.90, stdev= 5.95 00:26:30.027 clat (usec): min=19, max=545, avg=188.62, stdev=34.53 00:26:30.027 lat (usec): min=140, max=567, avg=210.52, stdev=32.26 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 141], 00:26:30.027 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:26:30.027 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:26:30.027 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 285], 99.95th=[ 429], 00:26:30.027 | 99.99th=[ 545] 00:26:30.027 bw ( KiB/s): min= 9240, max= 9240, per=28.58%, avg=9240.00, stdev= 0.00, samples=1 00:26:30.027 iops : min= 2310, max= 2310, avg=2310.00, stdev= 0.00, samples=1 00:26:30.027 lat (usec) : 20=0.02%, 250=63.94%, 500=35.99%, 750=0.05% 00:26:30.027 cpu : usr=1.00%, sys=5.80%, ctx=4161, majf=0, minf=11 00:26:30.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 issued rwts: total=2048,2109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:30.027 job2: (groupid=0, jobs=1): err= 0: pid=108006: Mon Dec 16 11:46:35 2024 00:26:30.027 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:30.027 slat (nsec): min=12791, max=85776, avg=18305.62, stdev=5780.89 00:26:30.027 clat (usec): min=195, max=648, avg=305.42, stdev=29.49 00:26:30.027 lat (usec): min=217, max=677, avg=323.72, stdev=30.77 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 208], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:26:30.027 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 306], 00:26:30.027 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:26:30.027 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 529], 99.95th=[ 652], 00:26:30.027 | 99.99th=[ 652] 
00:26:30.027 write: IOPS=1928, BW=7712KiB/s (7897kB/s)(7720KiB/1001msec); 0 zone resets 00:26:30.027 slat (usec): min=19, max=129, avg=28.61, stdev= 7.06 00:26:30.027 clat (usec): min=134, max=1549, avg=228.06, stdev=35.43 00:26:30.027 lat (usec): min=161, max=1572, avg=256.66, stdev=35.86 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:26:30.027 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:26:30.027 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:26:30.027 | 99.00th=[ 273], 99.50th=[ 338], 99.90th=[ 469], 99.95th=[ 1549], 00:26:30.027 | 99.99th=[ 1549] 00:26:30.027 bw ( KiB/s): min= 8192, max= 8192, per=25.34%, avg=8192.00, stdev= 0.00, samples=1 00:26:30.027 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:30.027 lat (usec) : 250=53.78%, 500=46.11%, 750=0.09% 00:26:30.027 lat (msec) : 2=0.03% 00:26:30.027 cpu : usr=1.00%, sys=6.80%, ctx=3478, majf=0, minf=13 00:26:30.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 issued rwts: total=1536,1930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:30.027 job3: (groupid=0, jobs=1): err= 0: pid=108007: Mon Dec 16 11:46:35 2024 00:26:30.027 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:30.027 slat (nsec): min=13526, max=44893, avg=16469.20, stdev=3271.00 00:26:30.027 clat (usec): min=187, max=786, avg=307.23, stdev=41.98 00:26:30.027 lat (usec): min=204, max=804, avg=323.70, stdev=42.35 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 289], 20.00th=[ 293], 00:26:30.027 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:26:30.027 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 404], 00:26:30.027 | 99.00th=[ 433], 99.50th=[ 437], 99.90th=[ 635], 99.95th=[ 791], 00:26:30.027 | 99.99th=[ 791] 00:26:30.027 write: IOPS=1936, BW=7744KiB/s (7930kB/s)(7752KiB/1001msec); 0 zone resets 00:26:30.027 slat (nsec): min=19945, max=95464, avg=27925.69, stdev=7909.68 00:26:30.027 clat (usec): min=124, max=516, avg=227.89, stdev=21.35 00:26:30.027 lat (usec): min=149, max=542, avg=255.82, stdev=21.62 00:26:30.027 clat percentiles (usec): 00:26:30.027 | 1.00th=[ 182], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:26:30.027 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 229], 00:26:30.027 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 253], 00:26:30.027 | 99.00th=[ 281], 99.50th=[ 351], 99.90th=[ 474], 99.95th=[ 519], 00:26:30.027 | 99.99th=[ 519] 00:26:30.027 bw ( KiB/s): min= 8192, max= 8192, per=25.34%, avg=8192.00, stdev= 0.00, samples=1 00:26:30.027 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:30.027 lat (usec) : 250=54.40%, 500=45.42%, 750=0.14%, 1000=0.03% 00:26:30.027 cpu : usr=1.70%, sys=5.60%, ctx=3474, majf=0, minf=13 00:26:30.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.027 issued rwts: total=1536,1938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.027 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:26:30.027 00:26:30.027 Run status group 0 (all jobs): 00:26:30.027 READ: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:26:30.027 WRITE: bw=31.6MiB/s (33.1MB/s), 7712KiB/s-8448KiB/s (7897kB/s-8650kB/s), io=31.6MiB (33.1MB), run=1001-1001msec 00:26:30.027 00:26:30.027 Disk stats (read/write): 00:26:30.027 nvme0n1: ios=1693/2048, merge=0/0, ticks=440/400, in_queue=840, util=89.78% 00:26:30.027 nvme0n2: ios=1688/2048, merge=0/0, ticks=430/408, in_queue=838, util=90.21% 00:26:30.027 nvme0n3: ios=1443/1536, merge=0/0, ticks=446/373, in_queue=819, util=89.36% 00:26:30.027 nvme0n4: ios=1450/1536, merge=0/0, ticks=449/366, in_queue=815, util=89.81% 00:26:30.027 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:26:30.027 [global] 00:26:30.027 thread=1 00:26:30.027 invalidate=1 00:26:30.027 rw=write 00:26:30.027 time_based=1 00:26:30.027 runtime=1 00:26:30.027 ioengine=libaio 00:26:30.027 direct=1 00:26:30.027 bs=4096 00:26:30.027 iodepth=128 00:26:30.027 norandommap=0 00:26:30.027 numjobs=1 00:26:30.027 00:26:30.027 verify_dump=1 00:26:30.027 verify_backlog=512 00:26:30.027 verify_state_save=0 00:26:30.027 do_verify=1 00:26:30.027 verify=crc32c-intel 00:26:30.027 [job0] 00:26:30.027 filename=/dev/nvme0n1 00:26:30.027 [job1] 00:26:30.027 filename=/dev/nvme0n2 00:26:30.027 [job2] 00:26:30.027 filename=/dev/nvme0n3 00:26:30.027 [job3] 00:26:30.027 filename=/dev/nvme0n4 00:26:30.027 Could not set queue depth (nvme0n1) 00:26:30.027 Could not set queue depth (nvme0n2) 00:26:30.027 Could not set queue depth (nvme0n3) 00:26:30.027 Could not set queue depth (nvme0n4) 00:26:30.027 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:30.027 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:30.027 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:30.027 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:30.027 fio-3.35 00:26:30.027 Starting 4 threads 00:26:31.401 00:26:31.401 job0: (groupid=0, jobs=1): err= 0: pid=108062: Mon Dec 16 11:46:36 2024 00:26:31.401 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:26:31.401 slat (usec): min=9, max=9169, avg=166.14, stdev=860.08 00:26:31.401 clat (usec): min=10757, max=42594, avg=21219.51, stdev=6700.06 00:26:31.401 lat (usec): min=12292, max=42608, avg=21385.64, stdev=6717.47 00:26:31.401 clat percentiles (usec): 00:26:31.401 | 1.00th=[13435], 5.00th=[13698], 10.00th=[13829], 20.00th=[14091], 00:26:31.401 | 30.00th=[15008], 40.00th=[16909], 50.00th=[20579], 60.00th=[23200], 00:26:31.401 | 70.00th=[25035], 80.00th=[28181], 90.00th=[30016], 95.00th=[32900], 00:26:31.401 | 99.00th=[35390], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:26:31.401 | 99.99th=[42730] 00:26:31.401 write: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1003msec); 0 zone resets 00:26:31.401 slat (usec): min=8, max=6740, avg=193.15, stdev=743.87 00:26:31.401 clat (usec): min=1140, max=50606, avg=25269.12, stdev=11375.21 00:26:31.401 lat (usec): min=3818, max=50632, avg=25462.27, stdev=11431.30 00:26:31.401 clat percentiles (usec): 00:26:31.401 | 1.00th=[ 4555], 5.00th=[12911], 10.00th=[13304], 20.00th=[15008], 
00:26:31.401 | 30.00th=[18482], 40.00th=[20055], 50.00th=[23200], 60.00th=[24773], 00:26:31.401 | 70.00th=[28181], 80.00th=[32637], 90.00th=[46924], 95.00th=[48497], 00:26:31.401 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:26:31.401 | 99.99th=[50594] 00:26:31.401 bw ( KiB/s): min= 9536, max=12288, per=20.53%, avg=10912.00, stdev=1945.96, samples=2 00:26:31.401 iops : min= 2384, max= 3072, avg=2728.00, stdev=486.49, samples=2 00:26:31.401 lat (msec) : 2=0.02%, 4=0.11%, 10=1.07%, 20=43.08%, 50=54.95% 00:26:31.401 lat (msec) : 100=0.78% 00:26:31.401 cpu : usr=2.40%, sys=8.98%, ctx=286, majf=0, minf=11 00:26:31.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:31.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.401 issued rwts: total=2560,2856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.402 job1: (groupid=0, jobs=1): err= 0: pid=108063: Mon Dec 16 11:46:36 2024 00:26:31.402 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:26:31.402 slat (usec): min=2, max=10925, avg=229.56, stdev=1034.11 00:26:31.402 clat (usec): min=15691, max=45816, avg=28114.66, stdev=5451.35 00:26:31.402 lat (usec): min=15707, max=45835, avg=28344.22, stdev=5444.84 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[18220], 5.00th=[20579], 10.00th=[22938], 20.00th=[23725], 00:26:31.402 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26870], 60.00th=[28705], 00:26:31.402 | 70.00th=[29754], 80.00th=[32637], 90.00th=[35914], 95.00th=[38536], 00:26:31.402 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:26:31.402 | 99.99th=[45876] 00:26:31.402 write: IOPS=2478, BW=9913KiB/s (10.1MB/s)(9972KiB/1006msec); 0 zone resets 00:26:31.402 slat (usec): min=8, max=8082, avg=206.03, stdev=827.21 00:26:31.402 clat (usec): min=2607, max=55934, avg=27982.13, stdev=12608.40 00:26:31.402 lat (usec): min=4462, max=55956, avg=28188.16, stdev=12677.01 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[ 7373], 5.00th=[ 8356], 10.00th=[10028], 20.00th=[17957], 00:26:31.402 | 30.00th=[22414], 40.00th=[24773], 50.00th=[26608], 60.00th=[29492], 00:26:31.402 | 70.00th=[33424], 80.00th=[38536], 90.00th=[47449], 95.00th=[52167], 00:26:31.402 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:26:31.402 | 99.99th=[55837] 00:26:31.402 bw ( KiB/s): min= 9192, max= 9747, per=17.81%, avg=9469.50, stdev=392.44, samples=2 00:26:31.402 iops : min= 2298, max= 2436, avg=2367.00, stdev=97.58, samples=2 00:26:31.402 lat (msec) : 4=0.02%, 10=5.40%, 20=9.43%, 50=81.19%, 100=3.96% 00:26:31.402 cpu : usr=2.99%, sys=6.57%, ctx=510, majf=0, minf=17 00:26:31.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:31.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.402 issued rwts: total=2048,2493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.402 job2: (groupid=0, jobs=1): err= 0: pid=108064: Mon Dec 16 11:46:36 2024 00:26:31.402 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:26:31.402 slat (usec): min=5, max=4790, avg=84.89, stdev=379.01 00:26:31.402 clat (usec): min=8553, max=14184, avg=11150.01, stdev=764.89 
00:26:31.402 lat (usec): min=8851, max=15437, avg=11234.90, stdev=697.70 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10945], 00:26:31.402 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11207], 00:26:31.402 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[12125], 00:26:31.402 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:26:31.402 | 99.99th=[14222] 00:26:31.402 write: IOPS=5864, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1002msec); 0 zone resets 00:26:31.402 slat (usec): min=10, max=3087, avg=81.70, stdev=306.40 00:26:31.402 clat (usec): min=304, max=15017, avg=10850.67, stdev=1336.46 00:26:31.402 lat (usec): min=2819, max=15039, avg=10932.37, stdev=1332.73 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[ 6194], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:26:31.402 | 30.00th=[10421], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:26:31.402 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12649], 00:26:31.402 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:26:31.402 | 99.99th=[15008] 00:26:31.402 bw ( KiB/s): min=21672, max=24320, per=43.26%, avg=22996.00, stdev=1872.42, samples=2 00:26:31.402 iops : min= 5418, max= 6080, avg=5749.00, stdev=468.10, samples=2 00:26:31.402 lat (usec) : 500=0.01% 00:26:31.402 lat (msec) : 4=0.33%, 10=16.31%, 20=83.35% 00:26:31.402 cpu : usr=4.80%, sys=14.99%, ctx=777, majf=0, minf=11 00:26:31.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:31.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.402 issued rwts: total=5632,5876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.402 job3: (groupid=0, jobs=1): err= 0: pid=108065: Mon Dec 16 11:46:36 2024 00:26:31.402 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:26:31.402 slat (usec): min=4, max=10473, avg=227.87, stdev=1033.52 00:26:31.402 clat (usec): min=18938, max=44098, avg=29297.98, stdev=4842.77 00:26:31.402 lat (usec): min=21166, max=47253, avg=29525.85, stdev=4795.78 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[21103], 5.00th=[22938], 10.00th=[23462], 20.00th=[24773], 00:26:31.402 | 30.00th=[25822], 40.00th=[27919], 50.00th=[28705], 60.00th=[29754], 00:26:31.402 | 70.00th=[31589], 80.00th=[33424], 90.00th=[36439], 95.00th=[38011], 00:26:31.402 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:26:31.402 | 99.99th=[44303] 00:26:31.402 write: IOPS=2131, BW=8525KiB/s (8729kB/s)(8576KiB/1006msec); 0 zone resets 00:26:31.402 slat (usec): min=11, max=7459, avg=240.97, stdev=861.97 00:26:31.402 clat (usec): min=4061, max=54832, avg=31020.63, stdev=9551.51 00:26:31.402 lat (usec): min=6541, max=54856, avg=31261.60, stdev=9579.49 00:26:31.402 clat percentiles (usec): 00:26:31.402 | 1.00th=[16909], 5.00th=[21103], 10.00th=[22676], 20.00th=[23725], 00:26:31.402 | 30.00th=[24511], 40.00th=[26084], 50.00th=[27919], 60.00th=[30278], 00:26:31.402 | 70.00th=[32637], 80.00th=[38011], 90.00th=[47973], 95.00th=[52167], 00:26:31.402 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:26:31.402 | 99.99th=[54789] 00:26:31.402 bw ( KiB/s): min= 8192, max= 8208, per=15.43%, avg=8200.00, stdev=11.31, samples=2 00:26:31.402 iops : min= 2048, max= 2052, avg=2050.00, 
stdev= 2.83, samples=2 00:26:31.402 lat (msec) : 10=0.21%, 20=1.46%, 50=94.01%, 100=4.32% 00:26:31.402 cpu : usr=1.79%, sys=7.06%, ctx=614, majf=0, minf=13 00:26:31.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:31.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.402 issued rwts: total=2048,2144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.402 00:26:31.402 Run status group 0 (all jobs): 00:26:31.402 READ: bw=47.7MiB/s (50.0MB/s), 8143KiB/s-22.0MiB/s (8339kB/s-23.0MB/s), io=48.0MiB (50.3MB), run=1002-1006msec 00:26:31.402 WRITE: bw=51.9MiB/s (54.4MB/s), 8525KiB/s-22.9MiB/s (8729kB/s-24.0MB/s), io=52.2MiB (54.8MB), run=1002-1006msec 00:26:31.402 00:26:31.402 Disk stats (read/write): 00:26:31.402 nvme0n1: ios=2098/2471, merge=0/0, ticks=10001/15642, in_queue=25643, util=87.78% 00:26:31.402 nvme0n2: ios=1946/2048, merge=0/0, ticks=13587/15651, in_queue=29238, util=89.56% 00:26:31.402 nvme0n3: ios=4767/5120, merge=0/0, ticks=12365/11954, in_queue=24319, util=89.14% 00:26:31.402 nvme0n4: ios=1600/2048, merge=0/0, ticks=10563/15134, in_queue=25697, util=89.69% 00:26:31.402 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:26:31.402 [global] 00:26:31.402 thread=1 00:26:31.402 invalidate=1 00:26:31.402 rw=randwrite 00:26:31.402 time_based=1 00:26:31.402 runtime=1 00:26:31.402 ioengine=libaio 00:26:31.402 direct=1 00:26:31.402 bs=4096 00:26:31.402 iodepth=128 00:26:31.402 norandommap=0 00:26:31.402 numjobs=1 00:26:31.402 00:26:31.402 verify_dump=1 00:26:31.402 verify_backlog=512 00:26:31.402 verify_state_save=0 00:26:31.402 do_verify=1 00:26:31.402 verify=crc32c-intel 00:26:31.402 [job0] 00:26:31.402 filename=/dev/nvme0n1 00:26:31.402 [job1] 00:26:31.402 filename=/dev/nvme0n2 00:26:31.402 [job2] 00:26:31.402 filename=/dev/nvme0n3 00:26:31.402 [job3] 00:26:31.402 filename=/dev/nvme0n4 00:26:31.402 Could not set queue depth (nvme0n1) 00:26:31.402 Could not set queue depth (nvme0n2) 00:26:31.402 Could not set queue depth (nvme0n3) 00:26:31.402 Could not set queue depth (nvme0n4) 00:26:31.402 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:31.402 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:31.402 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:31.402 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:31.402 fio-3.35 00:26:31.402 Starting 4 threads 00:26:32.782 00:26:32.782 job0: (groupid=0, jobs=1): err= 0: pid=108119: Mon Dec 16 11:46:37 2024 00:26:32.782 read: IOPS=5706, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1008msec) 00:26:32.782 slat (usec): min=5, max=9486, avg=77.89, stdev=535.12 00:26:32.782 clat (usec): min=4174, max=24578, avg=11034.25, stdev=2216.15 00:26:32.782 lat (usec): min=5014, max=24597, avg=11112.14, stdev=2237.13 00:26:32.782 clat percentiles (usec): 00:26:32.782 | 1.00th=[ 5800], 5.00th=[ 7635], 10.00th=[ 8356], 20.00th=[ 9372], 00:26:32.782 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11338], 00:26:32.782 | 70.00th=[11863], 80.00th=[12518], 
90.00th=[13698], 95.00th=[14746], 00:26:32.782 | 99.00th=[17433], 99.50th=[18744], 99.90th=[24511], 99.95th=[24511], 00:26:32.782 | 99.99th=[24511] 00:26:32.782 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:26:32.782 slat (usec): min=5, max=8491, avg=83.08, stdev=632.87 00:26:32.782 clat (usec): min=2595, max=19912, avg=10461.82, stdev=1473.13 00:26:32.782 lat (usec): min=2614, max=20109, avg=10544.90, stdev=1589.54 00:26:32.782 clat percentiles (usec): 00:26:32.782 | 1.00th=[ 5604], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9765], 00:26:32.782 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:26:32.782 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11731], 95.00th=[12387], 00:26:32.782 | 99.00th=[15401], 99.50th=[17171], 99.90th=[19006], 99.95th=[19006], 00:26:32.782 | 99.99th=[19792] 00:26:32.782 bw ( KiB/s): min=24520, max=24576, per=36.37%, avg=24548.00, stdev=39.60, samples=2 00:26:32.782 iops : min= 6130, max= 6144, avg=6137.00, stdev= 9.90, samples=2 00:26:32.782 lat (msec) : 4=0.08%, 10=26.31%, 20=73.52%, 50=0.08% 00:26:32.782 cpu : usr=3.97%, sys=14.60%, ctx=353, majf=0, minf=10 00:26:32.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:32.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:32.782 issued rwts: total=5752,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:32.782 job1: (groupid=0, jobs=1): err= 0: pid=108120: Mon Dec 16 11:46:37 2024 00:26:32.782 read: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:26:32.782 slat (usec): min=6, max=13388, avg=123.81, stdev=827.02 00:26:32.782 clat (usec): min=6277, max=54567, avg=15999.19, stdev=6394.96 00:26:32.782 lat (usec): min=6290, max=54596, avg=16123.00, stdev=6472.05 00:26:32.782 clat percentiles (usec): 00:26:32.782 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11600], 20.00th=[12387], 00:26:32.782 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14484], 60.00th=[15008], 00:26:32.782 | 70.00th=[15664], 80.00th=[17695], 90.00th=[20841], 95.00th=[26084], 00:26:32.782 | 99.00th=[47449], 99.50th=[51119], 99.90th=[54264], 99.95th=[54789], 00:26:32.782 | 99.99th=[54789] 00:26:32.782 write: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(15.0MiB/1016msec); 0 zone resets 00:26:32.782 slat (usec): min=5, max=11519, avg=137.69, stdev=796.94 00:26:32.782 clat (usec): min=5526, max=55263, avg=18651.36, stdev=11486.47 00:26:32.782 lat (usec): min=5552, max=55274, avg=18789.05, stdev=11567.28 00:26:32.782 clat percentiles (usec): 00:26:32.782 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10814], 00:26:32.782 | 30.00th=[11600], 40.00th=[13042], 50.00th=[13960], 60.00th=[14222], 00:26:32.782 | 70.00th=[15533], 80.00th=[24249], 90.00th=[43254], 95.00th=[44827], 00:26:32.782 | 99.00th=[46400], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:26:32.782 | 99.99th=[55313] 00:26:32.782 bw ( KiB/s): min= 9144, max=20472, per=21.94%, avg=14808.00, stdev=8010.11, samples=2 00:26:32.782 iops : min= 2286, max= 5118, avg=3702.00, stdev=2002.53, samples=2 00:26:32.782 lat (msec) : 10=4.21%, 20=75.64%, 50=19.45%, 100=0.70% 00:26:32.782 cpu : usr=2.96%, sys=9.95%, ctx=309, majf=0, minf=11 00:26:32.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:32.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.782 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:32.782 issued rwts: total=3584,3830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:32.782 job2: (groupid=0, jobs=1): err= 0: pid=108126: Mon Dec 16 11:46:37 2024 00:26:32.782 read: IOPS=1840, BW=7362KiB/s (7539kB/s)(7384KiB/1003msec) 00:26:32.782 slat (usec): min=11, max=24108, avg=227.06, stdev=1324.57 00:26:32.782 clat (usec): min=2481, max=95289, avg=29187.85, stdev=17386.86 00:26:32.782 lat (usec): min=2495, max=97784, avg=29414.91, stdev=17492.23 00:26:32.782 clat percentiles (usec): 00:26:32.782 | 1.00th=[ 7504], 5.00th=[13829], 10.00th=[16712], 20.00th=[16909], 00:26:32.782 | 30.00th=[17171], 40.00th=[17433], 50.00th=[18220], 60.00th=[24773], 00:26:32.782 | 70.00th=[37487], 80.00th=[44827], 90.00th=[53216], 95.00th=[63177], 00:26:32.782 | 99.00th=[86508], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897], 00:26:32.782 | 99.99th=[94897] 00:26:32.782 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:26:32.782 slat (usec): min=10, max=24176, avg=274.97, stdev=1664.17 00:26:32.782 clat (msec): min=11, max=112, avg=35.15, stdev=22.35 00:26:32.782 lat (msec): min=11, max=112, avg=35.42, stdev=22.53 00:26:32.782 clat percentiles (msec): 00:26:32.782 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:26:32.782 | 30.00th=[ 17], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 35], 00:26:32.782 | 70.00th=[ 48], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 78], 00:26:32.782 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:26:32.782 | 99.99th=[ 113] 00:26:32.782 bw ( KiB/s): min= 4087, max=12288, per=12.13%, avg=8187.50, stdev=5798.98, samples=2 00:26:32.782 iops : min= 1021, max= 3072, avg=2046.50, stdev=1450.28, samples=2 00:26:32.782 lat (msec) : 4=0.13%, 10=1.36%, 20=42.17%, 50=35.82%, 100=18.90% 00:26:32.782 lat (msec) : 250=1.62% 00:26:32.782 cpu : usr=1.30%, sys=6.99%, ctx=221, majf=0, minf=15 00:26:32.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:32.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:32.782 issued rwts: total=1846,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:32.782 job3: (groupid=0, jobs=1): err= 0: pid=108128: Mon Dec 16 11:46:37 2024 00:26:32.782 read: IOPS=4725, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1004msec) 00:26:32.782 slat (usec): min=4, max=3095, avg=99.43, stdev=470.67 00:26:32.782 clat (usec): min=1096, max=16226, avg=13062.83, stdev=1132.86 00:26:32.783 lat (usec): min=3971, max=17165, avg=13162.26, stdev=1047.72 00:26:32.783 clat percentiles (usec): 00:26:32.783 | 1.00th=[ 7504], 5.00th=[10945], 10.00th=[12518], 20.00th=[12911], 00:26:32.783 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:26:32.783 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13960], 00:26:32.783 | 99.00th=[14615], 99.50th=[14746], 99.90th=[16188], 99.95th=[16188], 00:26:32.783 | 99.99th=[16188] 00:26:32.783 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:26:32.783 slat (usec): min=10, max=3048, avg=96.16, stdev=427.74 00:26:32.783 clat (usec): min=9806, max=16035, avg=12644.99, stdev=1253.20 00:26:32.783 lat (usec): min=9829, max=16373, avg=12741.16, stdev=1233.48 00:26:32.783 clat percentiles (usec): 00:26:32.783 | 
1.00th=[10421], 5.00th=[10683], 10.00th=[10814], 20.00th=[11207], 00:26:32.783 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13173], 60.00th=[13304], 00:26:32.783 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:26:32.783 | 99.00th=[14746], 99.50th=[14746], 99.90th=[14877], 99.95th=[16057], 00:26:32.783 | 99.99th=[16057] 00:26:32.783 bw ( KiB/s): min=20480, max=20480, per=30.35%, avg=20480.00, stdev= 0.00, samples=2 00:26:32.783 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:26:32.783 lat (msec) : 2=0.01%, 4=0.02%, 10=0.85%, 20=99.12% 00:26:32.783 cpu : usr=5.08%, sys=12.06%, ctx=422, majf=0, minf=13 00:26:32.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:32.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:32.783 issued rwts: total=4744,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:32.783 00:26:32.783 Run status group 0 (all jobs): 00:26:32.783 READ: bw=61.2MiB/s (64.2MB/s), 7362KiB/s-22.3MiB/s (7539kB/s-23.4MB/s), io=62.2MiB (65.2MB), run=1003-1016msec 00:26:32.783 WRITE: bw=65.9MiB/s (69.1MB/s), 8167KiB/s-23.8MiB/s (8364kB/s-25.0MB/s), io=67.0MiB (70.2MB), run=1003-1016msec 00:26:32.783 00:26:32.783 Disk stats (read/write): 00:26:32.783 nvme0n1: ios=4976/5120, merge=0/0, ticks=50717/49400, in_queue=100117, util=86.63% 00:26:32.783 nvme0n2: ios=3085/3438, merge=0/0, ticks=44347/54716, in_queue=99063, util=87.56% 00:26:32.783 nvme0n3: ios=1459/1536, merge=0/0, ticks=15127/19173, in_queue=34300, util=89.11% 00:26:32.783 nvme0n4: ios=4096/4233, merge=0/0, ticks=12425/11705, in_queue=24130, util=89.47% 00:26:32.783 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:26:32.783 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=108141 00:26:32.783 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:26:32.783 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:26:32.783 [global] 00:26:32.783 thread=1 00:26:32.783 invalidate=1 00:26:32.783 rw=read 00:26:32.783 time_based=1 00:26:32.783 runtime=10 00:26:32.783 ioengine=libaio 00:26:32.783 direct=1 00:26:32.783 bs=4096 00:26:32.783 iodepth=1 00:26:32.783 norandommap=1 00:26:32.783 numjobs=1 00:26:32.783 00:26:32.783 [job0] 00:26:32.783 filename=/dev/nvme0n1 00:26:32.783 [job1] 00:26:32.783 filename=/dev/nvme0n2 00:26:32.783 [job2] 00:26:32.783 filename=/dev/nvme0n3 00:26:32.783 [job3] 00:26:32.783 filename=/dev/nvme0n4 00:26:32.783 Could not set queue depth (nvme0n1) 00:26:32.783 Could not set queue depth (nvme0n2) 00:26:32.783 Could not set queue depth (nvme0n3) 00:26:32.783 Could not set queue depth (nvme0n4) 00:26:32.783 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.783 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.783 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.783 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:32.783 fio-3.35 00:26:32.783 Starting 4 
threads 00:26:36.077 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:26:36.077 fio: pid=108188, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:36.077 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47570944, buflen=4096 00:26:36.077 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:26:36.335 fio: pid=108187, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:36.335 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=54362112, buflen=4096 00:26:36.335 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:36.335 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:26:36.610 fio: pid=108185, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:36.610 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32837632, buflen=4096 00:26:36.610 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:36.610 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:26:36.887 fio: pid=108186, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:36.887 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60637184, buflen=4096 00:26:36.887 00:26:36.887 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108185: Mon Dec 16 11:46:42 2024 00:26:36.887 read: IOPS=2226, BW=8905KiB/s (9119kB/s)(31.3MiB/3601msec) 00:26:36.887 slat (usec): min=8, max=13934, avg=26.62, stdev=247.91 00:26:36.887 clat (usec): min=162, max=282706, avg=420.17, stdev=3189.22 00:26:36.887 lat (usec): min=176, max=282728, avg=446.79, stdev=3198.67 00:26:36.887 clat percentiles (usec): 00:26:36.887 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 265], 00:26:36.887 | 30.00th=[ 302], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 449], 00:26:36.887 | 70.00th=[ 457], 80.00th=[ 465], 90.00th=[ 474], 95.00th=[ 482], 00:26:36.887 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 1221], 99.95th=[ 2737], 00:26:36.887 | 99.99th=[283116] 00:26:36.887 bw ( KiB/s): min= 4288, max= 9216, per=16.24%, avg=7978.67, stdev=1836.63, samples=6 00:26:36.887 iops : min= 1072, max= 2304, avg=1994.67, stdev=459.16, samples=6 00:26:36.887 lat (usec) : 250=16.75%, 500=82.65%, 750=0.42%, 1000=0.02% 00:26:36.887 lat (msec) : 2=0.07%, 4=0.04%, 50=0.01%, 500=0.01% 00:26:36.887 cpu : usr=0.92%, sys=4.03%, ctx=8025, majf=0, minf=1 00:26:36.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.887 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.887 issued rwts: total=8018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:36.887 job1: (groupid=0, 
jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108186: Mon Dec 16 11:46:42 2024 00:26:36.888 read: IOPS=3810, BW=14.9MiB/s (15.6MB/s)(57.8MiB/3885msec) 00:26:36.888 slat (usec): min=9, max=17412, avg=18.62, stdev=236.40 00:26:36.888 clat (usec): min=78, max=28866, avg=242.23, stdev=245.09 00:26:36.888 lat (usec): min=174, max=28879, avg=260.85, stdev=341.28 00:26:36.888 clat percentiles (usec): 00:26:36.888 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 196], 00:26:36.888 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 265], 00:26:36.888 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 310], 00:26:36.888 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 523], 99.95th=[ 717], 00:26:36.888 | 99.99th=[ 4359] 00:26:36.888 bw ( KiB/s): min=12800, max=17920, per=30.65%, avg=15057.86, stdev=2176.86, samples=7 00:26:36.888 iops : min= 3200, max= 4480, avg=3764.43, stdev=544.20, samples=7 00:26:36.888 lat (usec) : 100=0.01%, 250=59.69%, 500=40.19%, 750=0.07%, 1000=0.01% 00:26:36.888 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.01% 00:26:36.888 cpu : usr=1.06%, sys=4.81%, ctx=14815, majf=0, minf=2 00:26:36.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 issued rwts: total=14805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:36.888 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108187: Mon Dec 16 11:46:42 2024 00:26:36.888 read: IOPS=3972, BW=15.5MiB/s (16.3MB/s)(51.8MiB/3341msec) 00:26:36.888 slat (usec): min=11, max=12195, avg=16.53, stdev=135.36 00:26:36.888 clat (usec): min=167, max=41707, avg=233.70, stdev=363.47 00:26:36.888 lat (usec): min=180, max=41723, avg=250.23, stdev=388.06 00:26:36.888 clat percentiles (usec): 00:26:36.888 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:26:36.888 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 235], 60.00th=[ 251], 00:26:36.888 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 293], 00:26:36.888 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 562], 99.95th=[ 873], 00:26:36.888 | 99.99th=[ 2057] 00:26:36.888 bw ( KiB/s): min=15056, max=17664, per=33.22%, avg=16318.67, stdev=905.43, samples=6 00:26:36.888 iops : min= 3764, max= 4416, avg=4079.67, stdev=226.36, samples=6 00:26:36.888 lat (usec) : 250=58.16%, 500=41.73%, 750=0.04%, 1000=0.02% 00:26:36.888 lat (msec) : 2=0.02%, 4=0.02%, 50=0.01% 00:26:36.888 cpu : usr=0.99%, sys=4.82%, ctx=13278, majf=0, minf=2 00:26:36.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 issued rwts: total=13273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:36.888 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108188: Mon Dec 16 11:46:42 2024 00:26:36.888 read: IOPS=3811, BW=14.9MiB/s (15.6MB/s)(45.4MiB/3047msec) 00:26:36.888 slat (nsec): min=9170, max=97216, avg=13900.28, stdev=3369.12 00:26:36.888 clat (usec): min=168, max=2070, avg=246.83, stdev=56.18 00:26:36.888 lat 
(usec): min=182, max=2084, avg=260.73, stdev=55.25 00:26:36.888 clat percentiles (usec): 00:26:36.888 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:26:36.888 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 269], 60.00th=[ 281], 00:26:36.888 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:26:36.888 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 529], 00:26:36.888 | 99.99th=[ 1729] 00:26:36.888 bw ( KiB/s): min=12800, max=18960, per=31.19%, avg=15318.67, stdev=2863.98, samples=6 00:26:36.888 iops : min= 3200, max= 4740, avg=3829.67, stdev=715.99, samples=6 00:26:36.888 lat (usec) : 250=49.13%, 500=50.81%, 750=0.03%, 1000=0.01% 00:26:36.888 lat (msec) : 2=0.01%, 4=0.01% 00:26:36.888 cpu : usr=0.72%, sys=4.79%, ctx=11619, majf=0, minf=2 00:26:36.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.888 issued rwts: total=11615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:36.888 00:26:36.888 Run status group 0 (all jobs): 00:26:36.888 READ: bw=48.0MiB/s (50.3MB/s), 8905KiB/s-15.5MiB/s (9119kB/s-16.3MB/s), io=186MiB (195MB), run=3047-3885msec 00:26:36.888 00:26:36.888 Disk stats (read/write): 00:26:36.888 nvme0n1: ios=6867/0, merge=0/0, ticks=3178/0, in_queue=3178, util=95.11% 00:26:36.888 nvme0n2: ios=14728/0, merge=0/0, ticks=3548/0, in_queue=3548, util=95.51% 00:26:36.888 nvme0n3: ios=12657/0, merge=0/0, ticks=2958/0, in_queue=2958, util=96.31% 00:26:36.888 nvme0n4: ios=11051/0, merge=0/0, ticks=2670/0, in_queue=2670, util=96.69% 00:26:36.888 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:36.888 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:26:37.146 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:37.146 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:26:37.404 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:37.404 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:26:37.663 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:37.663 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:26:37.922 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:37.922 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:26:38.490 
11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 108141 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:38.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:26:38.490 nvmf hotplug test: fio failed as expected 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:26:38.490 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.749 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.749 rmmod nvme_tcp 00:26:38.749 rmmod nvme_fabrics 00:26:38.749 rmmod nvme_keyring 
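The hotplug sequence above reduces to a simple pattern: fio keeps issuing reads against the NVMe-oF-attached namespaces while the backing bdevs are torn down over JSON-RPC, and the resulting err=95 (Operation not supported) failures are the expected outcome. A minimal sketch of that pattern, assuming a target already serving /dev/nvme0n1..4 as in the trace (the fio-wrapper plumbing and state files are omitted):

    # Start a time-based read workload against the exported namespaces,
    # mirroring the [global]/[job0..3] job file echoed above.
    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 --direct=1 \
        --ioengine=libaio --iodepth=1 --time_based --runtime=10 &
    fio_pid=$!

    # While fio is in flight, delete the backing bdevs over JSON-RPC; reads
    # against the removed namespaces then fail with err=95, as logged above.
    scripts/rpc.py bdev_raid_delete raid0
    scripts/rpc.py bdev_malloc_delete Malloc0

    # fio exits non-zero, which the test treats as success.
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'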
00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 107657 ']' 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 107657 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 107657 ']' 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 107657 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107657 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107657' 00:26:38.749 killing process with pid 107657 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 107657 00:26:38.749 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 107657 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:39.007 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:26:39.266 00:26:39.266 real 0m20.613s 00:26:39.266 user 1m0.917s 00:26:39.266 sys 0m11.319s 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.266 ************************************ 00:26:39.266 END TEST nvmf_fio_target 00:26:39.266 ************************************ 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:39.266 ************************************ 00:26:39.266 START TEST nvmf_bdevio 00:26:39.266 ************************************ 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:39.266 * Looking for test storage... 
00:26:39.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:26:39.266 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.525 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:39.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.526 --rc genhtml_branch_coverage=1 00:26:39.526 --rc genhtml_function_coverage=1 00:26:39.526 --rc genhtml_legend=1 00:26:39.526 --rc geninfo_all_blocks=1 00:26:39.526 --rc geninfo_unexecuted_blocks=1 00:26:39.526 00:26:39.526 ' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:39.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.526 --rc genhtml_branch_coverage=1 00:26:39.526 --rc genhtml_function_coverage=1 00:26:39.526 --rc genhtml_legend=1 00:26:39.526 --rc geninfo_all_blocks=1 00:26:39.526 --rc geninfo_unexecuted_blocks=1 00:26:39.526 00:26:39.526 ' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:39.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.526 --rc genhtml_branch_coverage=1 00:26:39.526 --rc genhtml_function_coverage=1 00:26:39.526 --rc genhtml_legend=1 00:26:39.526 --rc geninfo_all_blocks=1 00:26:39.526 --rc geninfo_unexecuted_blocks=1 00:26:39.526 00:26:39.526 ' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:39.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.526 --rc genhtml_branch_coverage=1 00:26:39.526 --rc genhtml_function_coverage=1 00:26:39.526 --rc genhtml_legend=1 00:26:39.526 --rc geninfo_all_blocks=1 00:26:39.526 --rc geninfo_unexecuted_blocks=1 00:26:39.526 00:26:39.526 ' 00:26:39.526 11:46:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.526 11:46:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.526 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:39.527 Cannot find device "nvmf_init_br" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:39.527 Cannot find device "nvmf_init_br2" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:39.527 Cannot find device "nvmf_tgt_br" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.527 Cannot find device "nvmf_tgt_br2" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:39.527 Cannot find device "nvmf_init_br" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:39.527 Cannot find device "nvmf_init_br2" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:39.527 Cannot find device "nvmf_tgt_br" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:39.527 Cannot find device "nvmf_tgt_br2" 00:26:39.527 11:46:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:39.527 Cannot find device "nvmf_br" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:39.527 Cannot find device "nvmf_init_if" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:39.527 Cannot find device "nvmf_init_if2" 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:39.527 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:39.787 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:39.787 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:39.787 11:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:39.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:39.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:26:39.787 00:26:39.787 --- 10.0.0.3 ping statistics --- 00:26:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.787 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:39.787 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:39.787 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:26:39.787 00:26:39.787 --- 10.0.0.4 ping statistics --- 00:26:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.787 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:39.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:26:39.787 00:26:39.787 --- 10.0.0.1 ping statistics --- 00:26:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.787 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:39.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:26:39.787 00:26:39.787 --- 10.0.0.2 ping statistics --- 00:26:39.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.787 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=108564 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 108564 00:26:39.787 
11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 108564 ']' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.787 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.046 [2024-12-16 11:46:45.260991] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:40.046 [2024-12-16 11:46:45.262438] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:26:40.046 [2024-12-16 11:46:45.262530] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.046 [2024-12-16 11:46:45.419479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.304 [2024-12-16 11:46:45.490581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.304 [2024-12-16 11:46:45.490648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.305 [2024-12-16 11:46:45.490663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.305 [2024-12-16 11:46:45.490674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.305 [2024-12-16 11:46:45.490683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.305 [2024-12-16 11:46:45.492359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:26:40.305 [2024-12-16 11:46:45.492519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:26:40.305 [2024-12-16 11:46:45.492561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:26:40.305 [2024-12-16 11:46:45.492565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.305 [2024-12-16 11:46:45.596944] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:40.305 [2024-12-16 11:46:45.597418] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:40.305 [2024-12-16 11:46:45.597605] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
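The target start traced here reduces to a single command; the following is condensed directly from the log (namespace, binary path, and flags exactly as in this run):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x78
  # -m 0x78 = 0b0111_1000: reactors pinned to cores 3, 4, 5 and 6, matching
  #   the four "Reactor started on core N" notices logged here
  # -e 0xFFFF: enable every tracepoint group ("Tracepoint Group Mask 0xFFFF")
  # --interrupt-mode: reactors sleep in epoll instead of busy-polling, hence
  #   the "Set spdk_thread (...) to intr mode" notices around this point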
00:26:40.305 [2024-12-16 11:46:45.598297] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:40.305 [2024-12-16 11:46:45.598851] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.305 [2024-12-16 11:46:45.682288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.305 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.564 Malloc0 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:40.564 [2024-12-16 11:46:45.762476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:40.564 { 00:26:40.564 "params": { 00:26:40.564 "name": "Nvme$subsystem", 00:26:40.564 "trtype": "$TEST_TRANSPORT", 00:26:40.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.564 "adrfam": "ipv4", 00:26:40.564 "trsvcid": "$NVMF_PORT", 00:26:40.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.564 "hdgst": ${hdgst:-false}, 00:26:40.564 "ddgst": ${ddgst:-false} 00:26:40.564 }, 00:26:40.564 "method": "bdev_nvme_attach_controller" 00:26:40.564 } 00:26:40.564 EOF 00:26:40.564 )") 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:26:40.564 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:40.564 "params": { 00:26:40.564 "name": "Nvme1", 00:26:40.564 "trtype": "tcp", 00:26:40.564 "traddr": "10.0.0.3", 00:26:40.564 "adrfam": "ipv4", 00:26:40.564 "trsvcid": "4420", 00:26:40.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:40.564 "hdgst": false, 00:26:40.564 "ddgst": false 00:26:40.564 }, 00:26:40.564 "method": "bdev_nvme_attach_controller" 00:26:40.564 }' 00:26:40.564 [2024-12-16 11:46:45.828583] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:26:40.564 [2024-12-16 11:46:45.828685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108599 ] 00:26:40.823 [2024-12-16 11:46:45.981410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:40.823 [2024-12-16 11:46:46.049375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.823 [2024-12-16 11:46:46.049497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.823 [2024-12-16 11:46:46.049507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.823 I/O targets: 00:26:40.823 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:40.823 00:26:40.823 00:26:40.823 CUnit - A unit testing framework for C - Version 2.1-3 00:26:40.823 http://cunit.sourceforge.net/ 00:26:40.823 00:26:40.823 00:26:40.823 Suite: bdevio tests on: Nvme1n1 00:26:41.081 Test: blockdev write read block ...passed 00:26:41.081 Test: blockdev write zeroes read block ...passed 00:26:41.081 Test: blockdev write zeroes read no split ...passed 00:26:41.081 Test: blockdev write zeroes read split ...passed 00:26:41.081 Test: blockdev write zeroes read split partial ...passed 00:26:41.081 Test: blockdev reset ...[2024-12-16 11:46:46.341664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:41.081 [2024-12-16 11:46:46.341765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x712f70 (9): Bad file descriptor 00:26:41.081 passed 00:26:41.081 Test: blockdev write read 8 blocks ...[2024-12-16 11:46:46.345337] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:26:41.081 passed 00:26:41.081 Test: blockdev write read size > 128k ...passed 00:26:41.081 Test: blockdev write read invalid size ...passed 00:26:41.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:41.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:41.081 Test: blockdev write read max offset ...passed 00:26:41.081 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:41.081 Test: blockdev writev readv 8 blocks ...passed 00:26:41.081 Test: blockdev writev readv 30 x 1block ...passed 00:26:41.340 Test: blockdev writev readv block ...passed 00:26:41.340 Test: blockdev writev readv size > 128k ...passed 00:26:41.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:41.340 Test: blockdev comparev and writev ...[2024-12-16 11:46:46.518491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.518532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.518553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.518565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.518905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.518929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.518946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.518957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.519317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.519336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.519352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.519363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.519726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.519744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.519760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:41.340 [2024-12-16 11:46:46.519771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:26:41.340 passed 00:26:41.340 Test: blockdev nvme passthru rw ...passed 00:26:41.340 Test: blockdev nvme passthru vendor specific ...passed 00:26:41.340 Test: blockdev nvme admin passthru ...[2024-12-16 11:46:46.603388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:41.340 [2024-12-16 11:46:46.603438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.603592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:41.340 [2024-12-16 11:46:46.603610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.603732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:41.340 [2024-12-16 11:46:46.603748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:41.340 [2024-12-16 11:46:46.603872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:41.340 [2024-12-16 11:46:46.603888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:41.340 passed 00:26:41.340 Test: blockdev copy ...passed 00:26:41.340 00:26:41.340 Run Summary: Type Total Ran Passed Failed Inactive 00:26:41.340 suites 1 1 n/a 0 0 00:26:41.340 tests 23 23 23 0 0 00:26:41.340 asserts 152 152 152 0 n/a 00:26:41.340 00:26:41.340 Elapsed time = 0.864 seconds 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:41.599 rmmod nvme_tcp 00:26:41.599 rmmod nvme_fabrics 00:26:41.599 rmmod nvme_keyring 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:41.599 11:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 108564 ']' 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 108564 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 108564 ']' 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 108564 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108564 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:26:41.599 killing process with pid 108564 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108564' 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 108564 00:26:41.599 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 108564 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:41.858 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:26:42.116 00:26:42.116 real 0m2.901s 00:26:42.116 user 0m7.185s 00:26:42.116 sys 0m1.214s 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:42.116 ************************************ 00:26:42.116 END TEST nvmf_bdevio 00:26:42.116 ************************************ 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:42.116 00:26:42.116 real 3m36.570s 00:26:42.116 user 9m45.544s 00:26:42.116 sys 1m18.401s 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.116 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:42.116 ************************************ 00:26:42.116 END TEST nvmf_target_core_interrupt_mode 00:26:42.116 ************************************ 00:26:42.376 11:46:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:42.376 11:46:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:42.376 11:46:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.376 11:46:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.376 ************************************ 00:26:42.376 START TEST nvmf_interrupt 00:26:42.376 ************************************ 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:42.376 * Looking for test storage... 00:26:42.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:42.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.376 --rc genhtml_branch_coverage=1 00:26:42.376 --rc genhtml_function_coverage=1 00:26:42.376 --rc genhtml_legend=1 00:26:42.376 --rc geninfo_all_blocks=1 00:26:42.376 --rc geninfo_unexecuted_blocks=1 00:26:42.376 00:26:42.376 ' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:42.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.376 --rc genhtml_branch_coverage=1 00:26:42.376 --rc genhtml_function_coverage=1 00:26:42.376 --rc genhtml_legend=1 00:26:42.376 --rc geninfo_all_blocks=1 00:26:42.376 --rc geninfo_unexecuted_blocks=1 00:26:42.376 00:26:42.376 ' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:42.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.376 --rc genhtml_branch_coverage=1 00:26:42.376 --rc genhtml_function_coverage=1 00:26:42.376 --rc genhtml_legend=1 00:26:42.376 --rc geninfo_all_blocks=1 00:26:42.376 --rc geninfo_unexecuted_blocks=1 00:26:42.376 00:26:42.376 ' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:42.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.376 --rc genhtml_branch_coverage=1 00:26:42.376 --rc genhtml_function_coverage=1 00:26:42.376 --rc genhtml_legend=1 00:26:42.376 --rc geninfo_all_blocks=1 00:26:42.376 --rc geninfo_unexecuted_blocks=1 00:26:42.376 00:26:42.376 ' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
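The "lt 1.15 2" / cmp_versions trace above is scripts/common.sh comparing the installed lcov version field by field before choosing coverage flags. A minimal standalone sketch of the same comparison, under a hypothetical name (ver_lt is not the script's function; the real helper dispatches on an operator argument, as the case "$op" trace above shows):

  ver_lt() {                         # 0 (true) iff dotted version $1 < $2
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                       # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov < 2: keep the legacy lcov_* rc options"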
00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:42.376 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:26:42.377 11:46:47 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
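nvmftestinit has reached nvmf_veth_init, traced below; the topology it assembles condenses to the following sketch (interface names, namespace, and addresses as in this log, not the verbatim script):

  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs; one end of each is for the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiators 10.0.0.1/.2 stay on the host; targets 10.0.0.3/.4 live in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # all four bridge-side ends are joined through one bridge, then verified by ping
  ip link add nvmf_br type bridge
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done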
00:26:42.377 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:42.636 Cannot find device "nvmf_init_br" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:42.636 Cannot find device "nvmf_init_br2" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:42.636 Cannot find device "nvmf_tgt_br" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:42.636 Cannot find device "nvmf_tgt_br2" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:42.636 Cannot find device "nvmf_init_br" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:42.636 Cannot find device "nvmf_init_br2" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:42.636 Cannot find device "nvmf_tgt_br" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:42.636 Cannot find device "nvmf_tgt_br2" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:42.636 Cannot find device "nvmf_br" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:26:42.636 Cannot find device "nvmf_init_if" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:42.636 Cannot find device "nvmf_init_if2" 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:42.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:42.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:42.636 11:46:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:42.636 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
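The ACCEPT rules added just below go through the ipts wrapper, which tags every rule with an SPDK_NVMF comment; the iptr call seen in the bdevio teardown above then strips exactly the tagged rules. A minimal sketch of that pair, simplified from the helpers this log traces:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }     # tag on insert
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }  # drop tagged rules
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP
  # ... run the test ...
  iptr                                                              # comment-based cleanup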
00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:42.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:42.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:26:42.895 00:26:42.895 --- 10.0.0.3 ping statistics --- 00:26:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.895 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:42.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:42.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:26:42.895 00:26:42.895 --- 10.0.0.4 ping statistics --- 00:26:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.895 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:42.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:42.895 00:26:42.895 --- 10.0.0.1 ping statistics --- 00:26:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.895 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:42.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:42.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:26:42.895 00:26:42.895 --- 10.0.0.2 ping statistics --- 00:26:42.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.895 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=108850 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 108850 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 108850 ']' 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.895 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.896 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.896 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.896 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:42.896 [2024-12-16 11:46:48.251991] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:42.896 [2024-12-16 11:46:48.253369] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:26:42.896 [2024-12-16 11:46:48.254203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.154 [2024-12-16 11:46:48.415521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:43.154 [2024-12-16 11:46:48.476976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:43.154 [2024-12-16 11:46:48.477081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.154 [2024-12-16 11:46:48.477096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.154 [2024-12-16 11:46:48.477107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.154 [2024-12-16 11:46:48.477116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.154 [2024-12-16 11:46:48.478291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.154 [2024-12-16 11:46:48.478305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.414 [2024-12-16 11:46:48.585555] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:43.414 [2024-12-16 11:46:48.586080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:43.414 [2024-12-16 11:46:48.586128] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:26:43.414 5000+0 records in 00:26:43.414 5000+0 records out 00:26:43.414 10240000 bytes (10 MB, 9.8 MiB) copied, 0.033513 s, 306 MB/s 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 AIO0 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 [2024-12-16 11:46:48.743353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 [2024-12-16 11:46:48.771763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108850 0 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 0 idle 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:43.414 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108850 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0' 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108850 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108850 1 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 1 idle 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:43.673 11:46:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108854 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108854 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=108910 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:43.933 
11:46:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108850 0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108850 0 busy 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108850 root 20 0 64.2g 45056 32768 S 6.7 0.4 0:00.31 reactor_0' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108850 root 20 0 64.2g 45056 32768 S 6.7 0.4 0:00.31 reactor_0 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:43.933 11:46:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108850 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.69 reactor_0' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108850 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.69 reactor_0 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108850 1 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108850 1 busy 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108854 root 20 0 64.2g 46336 33152 R 60.0 0.4 0:00.80 reactor_1' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108854 root 20 0 64.2g 46336 33152 R 60.0 0.4 0:00.80 reactor_1 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:45.310 11:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 108910 00:26:55.303 Initializing NVMe Controllers 00:26:55.303 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.303 Controller IO queue size 256, less than required. 00:26:55.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.303 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:55.303 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:55.303 Initialization complete. Launching workers. 
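Note on the sequence above: the interrupt test drives I/O at the target with spdk_nvme_perf pinned to cores 2-3 (-c 0xC) and, while the workload runs, samples each reactor thread of pid 108850 with top to confirm it leaves its idle interrupt state under load. A minimal bash sketch of that check, paraphrasing the interrupt/common.sh xtrace above rather than quoting the script verbatim (the reactor_cpu_rate helper name and the sleep are illustrative assumptions; the perf flags are copied from the invocation logged above):

# Sketch only; simplified from the interrupt/common.sh steps traced above.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    # -bHn 1: batch mode, one iteration, per-thread rows; SPDK reactor threads are named reactor_<core>
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g' | awk '{print $9}'
}

# Generate load from cores 2-3 so reactors 0-1 must wake up to serve it
# (-q queue depth, -o I/O size, -w workload, -M 30 = 30% reads, -t run time in seconds).
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

sleep 1                               # let the workload ramp up
rate=$(reactor_cpu_rate 108850 0)     # e.g. 99.9 while perf runs, 0.0 when idle
(( ${rate%.*} > 30 )) && echo "reactor_0 is busy"   # 30 = BUSY_THRESHOLD set above

The latency summary that follows is spdk_nvme_perf's own output for the two I/O cores.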
00:26:55.303 ========================================================
00:26:55.303 Latency(us)
00:26:55.303 Device Information : IOPS MiB/s Average min max
00:26:55.303 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6881.20 26.88 37262.76 10146.64 252921.66
00:26:55.303 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7026.30 27.45 36485.35 6800.21 251334.69
00:26:55.303 ========================================================
00:26:55.303 Total : 13907.50 54.33 36870.00 6800.21 252921.66
00:26:55.303
00:26:55.303 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:55.303 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108850 0 00:26:55.303 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 0 idle 00:26:55.303 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:55.303 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108850 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.40 reactor_0' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108850 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.40 reactor_0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108850 1 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 1 idle 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108854 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.54 reactor_1' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108854 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.54 reactor_1 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:55.304 11:46:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108850 0 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 0 idle 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:56.679 11:47:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108850 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.46 reactor_0' 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108850 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.46 reactor_0 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108850 1 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108850 1 idle 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108850 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108850 -w 256 00:26:56.679 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108854 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.55 reactor_1' 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108854 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.55 reactor_1 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:56.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:26:56.937 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:26:56.938 11:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:26:56.938 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.938 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.196 rmmod nvme_tcp 00:26:57.196 rmmod nvme_fabrics 00:26:57.196 rmmod nvme_keyring 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 108850 ']' 00:26:57.196 11:47:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 108850 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 108850 ']' 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 108850 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108850 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.196 killing process with pid 108850 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108850' 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 108850 00:26:57.196 11:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 108850 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:57.454 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:57.713 11:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:26:57.713 00:26:57.713 real 0m15.529s 00:26:57.713 user 0m27.920s 00:26:57.713 sys 0m7.298s 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.713 ************************************ 00:26:57.713 END TEST nvmf_interrupt 00:26:57.713 ************************************ 00:26:57.713 11:47:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 ************************************ 00:26:57.972 END TEST nvmf_tcp 00:26:57.972 ************************************ 00:26:57.972 00:26:57.972 real 20m35.900s 00:26:57.972 user 54m11.808s 00:26:57.972 sys 4m58.125s 00:26:57.972 11:47:03 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.972 11:47:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 11:47:03 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:26:57.972 11:47:03 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:57.972 11:47:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.972 11:47:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.972 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 ************************************ 00:26:57.972 START TEST spdkcli_nvmf_tcp 00:26:57.972 ************************************ 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:57.972 * Looking for test storage... 
00:26:57.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.972 --rc genhtml_branch_coverage=1 00:26:57.972 --rc genhtml_function_coverage=1 00:26:57.972 --rc genhtml_legend=1 00:26:57.972 --rc geninfo_all_blocks=1 00:26:57.972 --rc geninfo_unexecuted_blocks=1 00:26:57.972 00:26:57.972 ' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.972 --rc genhtml_branch_coverage=1 
00:26:57.972 --rc genhtml_function_coverage=1 00:26:57.972 --rc genhtml_legend=1 00:26:57.972 --rc geninfo_all_blocks=1 00:26:57.972 --rc geninfo_unexecuted_blocks=1 00:26:57.972 00:26:57.972 ' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.972 --rc genhtml_branch_coverage=1 00:26:57.972 --rc genhtml_function_coverage=1 00:26:57.972 --rc genhtml_legend=1 00:26:57.972 --rc geninfo_all_blocks=1 00:26:57.972 --rc geninfo_unexecuted_blocks=1 00:26:57.972 00:26:57.972 ' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.972 --rc genhtml_branch_coverage=1 00:26:57.972 --rc genhtml_function_coverage=1 00:26:57.972 --rc genhtml_legend=1 00:26:57.972 --rc geninfo_all_blocks=1 00:26:57.972 --rc geninfo_unexecuted_blocks=1 00:26:57.972 00:26:57.972 ' 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.972 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.232 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:26:58.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=109251 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 109251 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 109251 ']' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.232 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.232 [2024-12-16 11:47:03.451482] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:26:58.232 [2024-12-16 11:47:03.451621] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109251 ] 00:26:58.232 [2024-12-16 11:47:03.600186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.491 [2024-12-16 11:47:03.667337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.491 [2024-12-16 11:47:03.667352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.491 11:47:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:58.491 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:58.491 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:58.491 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:58.491 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:58.491 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:58.491 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:58.491 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:58.491 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:58.491 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:58.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:58.491 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:58.491 ' 00:27:01.778 [2024-12-16 11:47:06.629664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.712 [2024-12-16 11:47:07.958715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:05.242 [2024-12-16 11:47:10.430126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:07.772 [2024-12-16 11:47:12.562123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:09.147 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:09.147 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:09.147 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:09.147 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:09.147 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:09.147 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:09.147 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:09.147 11:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.714 11:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:09.714 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:09.714 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:09.714 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:09.714 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:09.714 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:09.714 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:09.714 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:09.714 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:09.714 ' 00:27:16.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:16.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:16.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:16.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:16.277 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:16.277 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:16.277 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:16.277 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:16.277 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 109251 ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109251' 00:27:16.277 killing process with pid 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 109251 ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 109251 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 109251 ']' 00:27:16.277 Process with pid 109251 is not found 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 109251 00:27:16.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109251) - No such process 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 109251 is not found' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:16.277 ************************************ 00:27:16.277 END TEST spdkcli_nvmf_tcp 00:27:16.277 ************************************ 
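Note on the spdkcli run that just finished: spdkcli_job.py feeds each "command / match-string / expect-success" triple to the running nvmf_tgt (pid 109251), check_match verifies the resulting configuration against a golden file, and the delete pass tears everything back down before the target is killed. A sketch of the check_match step using only the paths visible in the xtrace above; the redirection into the .test file is not shown by xtrace and is inferred from the rm -f that follows, so treat it as an assumption:

# Sketch of the verify step (spdkcli/common.sh@44-46 as traced above).
SPDK=/home/vagrant/spdk_repo/spdk

# Dump the live /nvmf subtree of the running target into the .test file...
"$SPDK/scripts/spdkcli.py" ll /nvmf > "$SPDK/test/spdkcli/spdkcli_nvmf.test"

# ...and compare it against the checked-in golden file; the match helper is
# built to tolerate volatile fields that a plain diff would reject.
"$SPDK/test/app/match/match" "$SPDK/test/spdkcli/match_files/spdkcli_nvmf.test.match"

# Drop the generated dump once it matches.
rm -f "$SPDK/test/spdkcli/spdkcli_nvmf.test"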
00:27:16.277 00:27:16.277 real 0m17.716s 00:27:16.277 user 0m38.575s 00:27:16.277 sys 0m0.987s 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.277 11:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.277 11:47:20 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:16.277 11:47:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.277 11:47:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.277 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.277 ************************************ 00:27:16.277 START TEST nvmf_identify_passthru 00:27:16.277 ************************************ 00:27:16.277 11:47:20 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:16.277 * Looking for test storage... 00:27:16.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:16.277 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:16.277 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:27:16.277 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:16.277 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.277 11:47:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:16.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.278 --rc genhtml_branch_coverage=1 00:27:16.278 --rc genhtml_function_coverage=1 00:27:16.278 --rc genhtml_legend=1 00:27:16.278 --rc geninfo_all_blocks=1 00:27:16.278 --rc geninfo_unexecuted_blocks=1 00:27:16.278 00:27:16.278 ' 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:16.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.278 --rc genhtml_branch_coverage=1 00:27:16.278 --rc genhtml_function_coverage=1 00:27:16.278 --rc genhtml_legend=1 00:27:16.278 --rc geninfo_all_blocks=1 00:27:16.278 --rc geninfo_unexecuted_blocks=1 00:27:16.278 00:27:16.278 ' 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:16.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.278 --rc genhtml_branch_coverage=1 00:27:16.278 --rc genhtml_function_coverage=1 00:27:16.278 --rc genhtml_legend=1 00:27:16.278 --rc geninfo_all_blocks=1 00:27:16.278 --rc geninfo_unexecuted_blocks=1 00:27:16.278 00:27:16.278 ' 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:16.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.278 --rc genhtml_branch_coverage=1 00:27:16.278 --rc genhtml_function_coverage=1 00:27:16.278 --rc genhtml_legend=1 00:27:16.278 --rc geninfo_all_blocks=1 00:27:16.278 --rc geninfo_unexecuted_blocks=1 00:27:16.278 00:27:16.278 ' 00:27:16.278 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.278 
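Before wiring up coverage, autotest_common.sh probes the installed lcov and picks the --rc option spelling via lt 1.15 2, i.e. a field-wise version comparison: both strings are split on '.', '-' and ':' and compared component by component until the first difference. A condensed sketch of that comparator, assuming plain numeric fields (the real cmp_versions in scripts/common.sh additionally normalizes each field through its decimal helper, as the decimal 1 / decimal 2 calls above show):

    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # walk the longer of the two arrays; missing fields count as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]    # all fields equal: only ==, <= and >= succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 decides at the first field: 1 < 2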
11:47:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.278 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:16.278 11:47:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.278 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:16.278 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:16.278 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:16.279 Cannot find device "nvmf_init_br" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:16.279 Cannot find device "nvmf_init_br2" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:16.279 Cannot find device "nvmf_tgt_br" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:16.279 Cannot find device "nvmf_tgt_br2" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:16.279 Cannot find device "nvmf_init_br" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:16.279 Cannot find device "nvmf_init_br2" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:16.279 Cannot find device "nvmf_tgt_br" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:16.279 Cannot find device "nvmf_tgt_br2" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:16.279 Cannot find device "nvmf_br" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:16.279 Cannot find device "nvmf_init_if" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:16.279 Cannot find device "nvmf_init_if2" 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:16.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:16.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:16.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:16.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:27:16.279 00:27:16.279 --- 10.0.0.3 ping statistics --- 00:27:16.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.279 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:16.279 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:16.279 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:27:16.279 00:27:16.279 --- 10.0.0.4 ping statistics --- 00:27:16.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.279 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:16.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:16.279 00:27:16.279 --- 10.0.0.1 ping statistics --- 00:27:16.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.279 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:16.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:16.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:27:16.279 00:27:16.279 --- 10.0.0.2 ping statistics --- 00:27:16.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.279 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.279 11:47:21 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:16.279 11:47:21 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:16.279 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:16.538 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
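The identify step above boils down to two probes: gen_nvme.sh emits a bdev config whose traddr fields list the local NVMe controllers (0000:00:10.0 and 0000:00:11.0 in this run), and spdk_nvme_identify is pointed at the first of them over PCIe to record its serial number. Condensed into a few lines, using the same repo-relative paths this run uses (get_first_nvme_bdf effectively takes the first traddr reported):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # ask the controller directly over PCIe for its serial number
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
             awk '/Serial Number:/ {print $3}')
    echo "first NVMe controller $bdf reports serial $serial"

The same identify (plus a Model Number probe) is repeated below against the TCP listener (trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), and the test passes only when the serial and model numbers seen through the NVMe-oF target match the ones read here over PCIe, which is what the --passthru-identify-ctrlr configuration is meant to guarantee.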
00:27:16.538 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:16.538 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:16.538 11:47:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=109762 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.797 11:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 109762 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 109762 ']' 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.797 11:47:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.797 [2024-12-16 11:47:22.148332] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:27:16.797 [2024-12-16 11:47:22.148920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.055 [2024-12-16 11:47:22.305348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.055 [2024-12-16 11:47:22.381352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.055 [2024-12-16 11:47:22.381647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.055 [2024-12-16 11:47:22.381878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.055 [2024-12-16 11:47:22.382148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:17.055 [2024-12-16 11:47:22.382277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.055 [2024-12-16 11:47:22.383720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.055 [2024-12-16 11:47:22.383809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.055 [2024-12-16 11:47:22.383894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.055 [2024-12-16 11:47:22.383895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:27:17.990 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.990 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 [2024-12-16 11:47:23.318838] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.990 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 [2024-12-16 11:47:23.333225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.990 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.990 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.249 Nvme0n1 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.249 [2024-12-16 11:47:23.478265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.249 [ 00:27:18.249 { 00:27:18.249 "allow_any_host": true, 00:27:18.249 "hosts": [], 00:27:18.249 "listen_addresses": [], 00:27:18.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:18.249 "subtype": "Discovery" 00:27:18.249 }, 00:27:18.249 { 00:27:18.249 "allow_any_host": true, 00:27:18.249 "hosts": [], 00:27:18.249 "listen_addresses": [ 00:27:18.249 { 00:27:18.249 "adrfam": "IPv4", 00:27:18.249 "traddr": "10.0.0.3", 00:27:18.249 "trsvcid": "4420", 00:27:18.249 "trtype": "TCP" 00:27:18.249 } 00:27:18.249 ], 00:27:18.249 "max_cntlid": 65519, 00:27:18.249 "max_namespaces": 1, 00:27:18.249 "min_cntlid": 1, 00:27:18.249 "model_number": "SPDK bdev Controller", 00:27:18.249 "namespaces": [ 00:27:18.249 { 00:27:18.249 "bdev_name": "Nvme0n1", 00:27:18.249 "name": "Nvme0n1", 00:27:18.249 "nguid": "EA5E8E9023C74003A4C05FF4639CD965", 00:27:18.249 "nsid": 1, 00:27:18.249 "uuid": "ea5e8e90-23c7-4003-a4c0-5ff4639cd965" 00:27:18.249 } 00:27:18.249 ], 00:27:18.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.249 "serial_number": "SPDK00000000000001", 00:27:18.249 "subtype": "NVMe" 00:27:18.249 } 00:27:18.249 ] 00:27:18.249 11:47:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.249 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:18.508 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:18.508 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.508 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:18.508 11:47:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:18.766 11:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:18.766 11:47:24 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:18.766 11:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:18.766 11:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.766 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.766 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:18.766 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.766 11:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:18.766 11:47:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.766 rmmod nvme_tcp 00:27:18.766 rmmod nvme_fabrics 00:27:18.766 rmmod nvme_keyring 00:27:18.766 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.767 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:27:18.767 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:27:18.767 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 109762 ']' 00:27:18.767 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 109762 00:27:18.767 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 109762 ']' 00:27:18.767 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 109762 00:27:18.767 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:27:18.767 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.767 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109762 00:27:19.025 killing process with pid 109762 00:27:19.025 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.025 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.025 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109762' 00:27:19.025 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 109762 00:27:19.025 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 109762 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:19.025 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:19.283 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:19.283 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:19.283 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.284 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:19.284 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.284 11:47:24 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:27:19.284 00:27:19.284 real 0m3.690s 00:27:19.284 user 0m8.725s 00:27:19.284 sys 0m0.957s 00:27:19.284 ************************************ 00:27:19.284 END TEST nvmf_identify_passthru 00:27:19.284 ************************************ 00:27:19.284 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.284 11:47:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:19.284 11:47:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:19.284 11:47:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:19.284 11:47:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.284 11:47:24 -- common/autotest_common.sh@10 -- # set +x 00:27:19.284 ************************************ 00:27:19.284 START TEST nvmf_dif 00:27:19.284 ************************************ 00:27:19.284 11:47:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:19.548 * Looking for test storage... 
00:27:19.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.548 --rc genhtml_branch_coverage=1 00:27:19.548 --rc genhtml_function_coverage=1 00:27:19.548 --rc genhtml_legend=1 00:27:19.548 --rc geninfo_all_blocks=1 00:27:19.548 --rc geninfo_unexecuted_blocks=1 00:27:19.548 00:27:19.548 ' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.548 --rc genhtml_branch_coverage=1 00:27:19.548 --rc genhtml_function_coverage=1 00:27:19.548 --rc genhtml_legend=1 00:27:19.548 --rc geninfo_all_blocks=1 00:27:19.548 --rc geninfo_unexecuted_blocks=1 00:27:19.548 00:27:19.548 ' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.548 --rc genhtml_branch_coverage=1 00:27:19.548 --rc genhtml_function_coverage=1 00:27:19.548 --rc genhtml_legend=1 00:27:19.548 --rc geninfo_all_blocks=1 00:27:19.548 --rc geninfo_unexecuted_blocks=1 00:27:19.548 00:27:19.548 ' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.548 --rc genhtml_branch_coverage=1 00:27:19.548 --rc genhtml_function_coverage=1 00:27:19.548 --rc genhtml_legend=1 00:27:19.548 --rc geninfo_all_blocks=1 00:27:19.548 --rc geninfo_unexecuted_blocks=1 00:27:19.548 00:27:19.548 ' 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.548 11:47:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.548 11:47:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 11:47:24 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 11:47:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 11:47:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:19.548 11:47:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.548 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:19.548 11:47:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:19.548 11:47:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:19.548 11:47:24 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:19.548 11:47:24 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:19.549 Cannot find device "nvmf_init_br" 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:19.549 Cannot find device "nvmf_init_br2" 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:19.549 Cannot find device "nvmf_tgt_br" 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:19.549 Cannot find device "nvmf_tgt_br2" 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:19.549 11:47:24 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:19.549 Cannot find device "nvmf_init_br" 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:19.807 Cannot find device "nvmf_init_br2" 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:19.807 Cannot find device "nvmf_tgt_br" 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:19.807 Cannot find device "nvmf_tgt_br2" 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:19.807 11:47:24 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:19.807 Cannot find device "nvmf_br" 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:19.807 Cannot find device "nvmf_init_if" 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:19.807 Cannot find device "nvmf_init_if2" 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:19.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:19.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:19.807 11:47:25 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:20.065 11:47:25 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:27:20.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:27:20.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:27:20.065
00:27:20.065 --- 10.0.0.3 ping statistics ---
00:27:20.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.065 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:27:20.065 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:27:20.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:27:20.065
00:27:20.065 --- 10.0.0.4 ping statistics ---
00:27:20.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.065 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:27:20.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:20.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:27:20.065
00:27:20.065 --- 10.0.0.1 ping statistics ---
00:27:20.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.065 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:27:20.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:20.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms
00:27:20.065
00:27:20.065 --- 10.0.0.2 ping statistics ---
00:27:20.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.065 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@461 -- # return 0
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:27:20.065 11:47:25 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:27:20.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:20.324 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:27:20.324 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:20.324 11:47:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:27:20.324 11:47:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:27:20.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=110158
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 110158
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 110158 ']'
00:27:20.324 11:47:25 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:20.324 11:47:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:27:20.582 [2024-12-16 11:47:25.763403] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
00:27:20.582 [2024-12-16 11:47:25.763482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:20.582 [2024-12-16 11:47:25.914727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:20.582 [2024-12-16 11:47:25.980774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
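The block above finishes the veth fixture: two initiator interfaces (nvmf_init_if/nvmf_init_if2, 10.0.0.1/10.0.0.2) and two target interfaces (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge and verified by the four pings. The earlier "Cannot find device"/"Cannot open network namespace" messages are the expected first-run cleanup of links that do not exist yet (each failure is followed by a true in the trace), not errors. A minimal one-pair sketch of the same fixture, using only iproute2 commands that appear in the log (the real nvmf_veth_init in test/nvmf/common.sh also creates the second pair and installs the iptables ACCEPT rules):

#!/usr/bin/env bash
# Hand-written condensation of the veth/bridge fixture built above;
# interface names and addresses are taken from the log, everything
# else is plain iproute2. Not the harness script itself.
set -euo pipefail
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                   # bridge joins the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                # initiator -> target, as checked above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 # target -> initiator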
00:27:20.582 [2024-12-16 11:47:25.981088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.582 [2024-12-16 11:47:25.981128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.582 [2024-12-16 11:47:25.981139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.582 [2024-12-16 11:47:25.981148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.582 [2024-12-16 11:47:25.981605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:20.841 11:47:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 11:47:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.841 11:47:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:20.841 11:47:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 [2024-12-16 11:47:26.165812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.841 11:47:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 ************************************ 00:27:20.841 START TEST fio_dif_1_default 00:27:20.841 ************************************ 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 bdev_null0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.841 11:47:26 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.841 [2024-12-16 11:47:26.213957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:20.841 { 00:27:20.841 "params": { 00:27:20.841 "name": "Nvme$subsystem", 00:27:20.841 "trtype": "$TEST_TRANSPORT", 00:27:20.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.841 "adrfam": "ipv4", 00:27:20.841 "trsvcid": "$NVMF_PORT", 00:27:20.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.841 "hdgst": ${hdgst:-false}, 00:27:20.841 "ddgst": ${ddgst:-false} 00:27:20.841 }, 00:27:20.841 "method": "bdev_nvme_attach_controller" 00:27:20.841 } 00:27:20.841 EOF 00:27:20.841 )") 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:20.841 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:20.842 11:47:26 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:27:20.842 11:47:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:20.842 "params": { 00:27:20.842 "name": "Nvme0", 00:27:20.842 "trtype": "tcp", 00:27:20.842 "traddr": "10.0.0.3", 00:27:20.842 "adrfam": "ipv4", 00:27:20.842 "trsvcid": "4420", 00:27:20.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:20.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:20.842 "hdgst": false, 00:27:20.842 "ddgst": false 00:27:20.842 }, 00:27:20.842 "method": "bdev_nvme_attach_controller" 00:27:20.842 }' 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:21.100 11:47:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.100 fio-3.35 00:27:21.100 Starting 1 thread 00:27:33.294 00:27:33.294 filename0: (groupid=0, jobs=1): err= 0: pid=110235: Mon Dec 16 11:47:37 2024 00:27:33.294 read: IOPS=4250, BW=16.6MiB/s (17.4MB/s)(166MiB/10026msec) 00:27:33.294 slat (usec): min=6, max=1546, avg= 8.85, stdev= 8.53 00:27:33.294 clat (usec): min=411, max=42877, avg=914.68, stdev=4085.95 00:27:33.294 lat (usec): min=417, max=42887, avg=923.53, stdev=4086.07 00:27:33.294 clat percentiles (usec): 00:27:33.294 | 1.00th=[ 445], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 482], 00:27:33.294 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 502], 
00:27:33.294 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 545], 00:27:33.294 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:27:33.294 | 99.99th=[42730] 00:27:33.294 bw ( KiB/s): min= 5760, max=28000, per=100.00%, avg=17043.20, stdev=7147.55, samples=20 00:27:33.294 iops : min= 1440, max= 7000, avg=4260.80, stdev=1786.89, samples=20 00:27:33.294 lat (usec) : 500=57.96%, 750=40.90%, 1000=0.06% 00:27:33.294 lat (msec) : 2=0.03%, 4=0.02%, 50=1.02% 00:27:33.294 cpu : usr=86.85%, sys=10.59%, ctx=111, majf=0, minf=9 00:27:33.294 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.294 issued rwts: total=42612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.294 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:33.294 00:27:33.294 Run status group 0 (all jobs): 00:27:33.294 READ: bw=16.6MiB/s (17.4MB/s), 16.6MiB/s-16.6MiB/s (17.4MB/s-17.4MB/s), io=166MiB (175MB), run=10026-10026msec 00:27:33.294 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 ************************************ 00:27:33.295 END TEST fio_dif_1_default 00:27:33.295 ************************************ 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 00:27:33.295 real 0m11.164s 00:27:33.295 user 0m9.473s 00:27:33.295 sys 0m1.351s 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:33.295 11:47:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:33.295 11:47:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 ************************************ 00:27:33.295 START TEST fio_dif_1_multi_subsystems 00:27:33.295 ************************************ 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@92 -- # local files=1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 bdev_null0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 [2024-12-16 11:47:37.429501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 bdev_null1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.295 { 00:27:33.295 "params": { 00:27:33.295 "name": "Nvme$subsystem", 00:27:33.295 "trtype": "$TEST_TRANSPORT", 00:27:33.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.295 "adrfam": "ipv4", 00:27:33.295 "trsvcid": "$NVMF_PORT", 00:27:33.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.295 "hdgst": ${hdgst:-false}, 00:27:33.295 "ddgst": ${ddgst:-false} 00:27:33.295 }, 00:27:33.295 "method": "bdev_nvme_attach_controller" 00:27:33.295 } 00:27:33.295 EOF 00:27:33.295 )") 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.295 { 00:27:33.295 "params": { 00:27:33.295 "name": "Nvme$subsystem", 00:27:33.295 "trtype": "$TEST_TRANSPORT", 00:27:33.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.295 "adrfam": "ipv4", 00:27:33.295 "trsvcid": "$NVMF_PORT", 00:27:33.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.295 "hdgst": ${hdgst:-false}, 00:27:33.295 "ddgst": ${ddgst:-false} 00:27:33.295 }, 00:27:33.295 "method": "bdev_nvme_attach_controller" 00:27:33.295 } 00:27:33.295 EOF 00:27:33.295 )") 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
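The config stanzas traced above come from gen_nvmf_target_json (nvmf/common.sh@560-586): each subsystem id yields one bdev_nvme_attach_controller fragment via a heredoc, IFS=, joins the fragments, and jq validates and pretty-prints the document fio receives on /dev/fd/62 (printed in full just below). A condensed reconstruction follows; the fragment fields are copied from the trace, but the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's JSON-config layout, since the trace only shows the joined fragments:

# Sketch of the generator, not a verbatim copy of nvmf/common.sh.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments (first character of IFS) and let jq
    # validate and pretty-print the final config.
    local IFS=,
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
EOF
}

For this test it is invoked as gen_nvmf_target_json 0 1, one attach block per subsystem, which matches the two-controller config printed below.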
00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:33.295 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:33.295 "params": { 00:27:33.295 "name": "Nvme0", 00:27:33.295 "trtype": "tcp", 00:27:33.295 "traddr": "10.0.0.3", 00:27:33.295 "adrfam": "ipv4", 00:27:33.295 "trsvcid": "4420", 00:27:33.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.296 "hdgst": false, 00:27:33.296 "ddgst": false 00:27:33.296 }, 00:27:33.296 "method": "bdev_nvme_attach_controller" 00:27:33.296 },{ 00:27:33.296 "params": { 00:27:33.296 "name": "Nvme1", 00:27:33.296 "trtype": "tcp", 00:27:33.296 "traddr": "10.0.0.3", 00:27:33.296 "adrfam": "ipv4", 00:27:33.296 "trsvcid": "4420", 00:27:33.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.296 "hdgst": false, 00:27:33.296 "ddgst": false 00:27:33.296 }, 00:27:33.296 "method": "bdev_nvme_attach_controller" 00:27:33.296 }' 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:33.296 11:47:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.296 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:33.296 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:33.296 fio-3.35 00:27:33.296 Starting 2 threads 00:27:43.269 00:27:43.269 filename0: (groupid=0, jobs=1): err= 0: pid=110391: Mon Dec 16 11:47:48 2024 00:27:43.269 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10013msec) 00:27:43.269 slat (usec): min=6, max=109, avg= 9.67, stdev= 5.10 00:27:43.269 clat (usec): min=410, max=42866, avg=14782.48, stdev=19377.91 00:27:43.269 lat (usec): min=418, max=42882, avg=14792.15, stdev=19378.10 00:27:43.269 clat percentiles (usec): 00:27:43.269 | 1.00th=[ 441], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 502], 00:27:43.269 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 783], 00:27:43.269 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:43.269 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:43.269 | 99.99th=[42730] 00:27:43.269 bw ( KiB/s): min= 544, max= 1664, per=47.89%, avg=1080.00, stdev=276.81, samples=20 00:27:43.269 iops : min= 136, 
max= 416, avg=270.00, stdev=69.20, samples=20 00:27:43.269 lat (usec) : 500=20.12%, 750=39.53%, 1000=4.59% 00:27:43.269 lat (msec) : 2=0.70%, 50=35.06% 00:27:43.269 cpu : usr=95.15%, sys=4.29%, ctx=194, majf=0, minf=0 00:27:43.269 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.269 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.269 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:43.269 filename1: (groupid=0, jobs=1): err= 0: pid=110392: Mon Dec 16 11:47:48 2024 00:27:43.269 read: IOPS=293, BW=1174KiB/s (1202kB/s)(11.5MiB/10001msec) 00:27:43.269 slat (nsec): min=6606, max=51166, avg=9627.20, stdev=4024.85 00:27:43.269 clat (usec): min=402, max=42460, avg=13595.59, stdev=18949.10 00:27:43.269 lat (usec): min=410, max=42471, avg=13605.22, stdev=18949.09 00:27:43.269 clat percentiles (usec): 00:27:43.269 | 1.00th=[ 445], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 502], 00:27:43.269 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 619], 00:27:43.269 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:43.269 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:43.269 | 99.99th=[42206] 00:27:43.269 bw ( KiB/s): min= 736, max= 1600, per=52.24%, avg=1177.26, stdev=277.25, samples=19 00:27:43.269 iops : min= 184, max= 400, avg=294.32, stdev=69.31, samples=19 00:27:43.269 lat (usec) : 500=20.03%, 750=42.23%, 1000=5.18% 00:27:43.269 lat (msec) : 2=0.41%, 50=32.15% 00:27:43.269 cpu : usr=94.95%, sys=4.53%, ctx=16, majf=0, minf=9 00:27:43.269 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.269 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.269 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:43.269 00:27:43.269 Run status group 0 (all jobs): 00:27:43.269 READ: bw=2253KiB/s (2307kB/s), 1080KiB/s-1174KiB/s (1106kB/s-1202kB/s), io=22.0MiB (23.1MB), run=10001-10013msec 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.269 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 00:27:43.528 real 0m11.312s 00:27:43.528 user 0m19.919s 00:27:43.528 sys 0m1.220s 00:27:43.528 ************************************ 00:27:43.528 END TEST fio_dif_1_multi_subsystems 00:27:43.528 ************************************ 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:43.528 11:47:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.528 11:47:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 ************************************ 00:27:43.528 START TEST fio_dif_rand_params 00:27:43.528 ************************************ 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 bdev_null0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:43.528 [2024-12-16 11:47:48.794595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:43.528 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:43.529 { 00:27:43.529 "params": { 00:27:43.529 "name": "Nvme$subsystem", 00:27:43.529 "trtype": "$TEST_TRANSPORT", 00:27:43.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.529 "adrfam": "ipv4", 00:27:43.529 "trsvcid": "$NVMF_PORT", 00:27:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.529 "hdgst": ${hdgst:-false}, 00:27:43.529 "ddgst": ${ddgst:-false} 00:27:43.529 }, 00:27:43.529 "method": "bdev_nvme_attach_controller" 00:27:43.529 } 00:27:43.529 EOF 00:27:43.529 )") 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
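The ldd/grep/awk triple traced at common/autotest_common.sh@1349 is the fio_plugin wrapper probing whether the SPDK ioengine was linked against a sanitizer runtime; if it was, that runtime must appear in LD_PRELOAD ahead of the plugin, or fio's dlopen() of the ioengine fails. A rough sketch of the wrapper, inferred from the trace alone (the authoritative helper lives in autotest_common.sh and is more defensive):

fio_plugin() {
    local fio_dir=/usr/src/fio
    local plugin=$1
    shift
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Third ldd column is the resolved library path; empty when
        # the plugin was not linked against that sanitizer runtime.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

On this run both probes came back empty, which is why the trace at @1356 shows LD_PRELOAD carrying only ' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'.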
00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:43.529 "params": { 00:27:43.529 "name": "Nvme0", 00:27:43.529 "trtype": "tcp", 00:27:43.529 "traddr": "10.0.0.3", 00:27:43.529 "adrfam": "ipv4", 00:27:43.529 "trsvcid": "4420", 00:27:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.529 "hdgst": false, 00:27:43.529 "ddgst": false 00:27:43.529 }, 00:27:43.529 "method": "bdev_nvme_attach_controller" 00:27:43.529 }' 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:43.529 11:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.787 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:43.787 ... 
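The job description itself is passed over /dev/fd/61 by gen_fio_conf and never echoed, so only the filename0 banner above reflects it. Given the knobs set at target/dif.sh@103 for this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5), a plausible hand-written equivalent is sketched below; filename=Nvme0n1 (the namespace bdev of the attached Nvme0 controller) and the [global] flags are assumptions, not harness output:

# Hypothetical reconstruction of the generated fio job, for
# illustration only; gen_fio_conf's exact output is not in the log.
cat > dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=5

[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
filename=Nvme0n1
EOF
# Outside the harness this could then be run as, e.g.:
#   LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
#     /usr/src/fio/fio dif_rand_params.fio --spdk_json_conf=<(gen_nvmf_target_json 0)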
00:27:43.787 fio-3.35 00:27:43.787 Starting 3 threads 00:27:50.352 00:27:50.352 filename0: (groupid=0, jobs=1): err= 0: pid=110548: Mon Dec 16 11:47:54 2024 00:27:50.352 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5004msec) 00:27:50.352 slat (nsec): min=5378, max=73286, avg=13235.48, stdev=4521.80 00:27:50.352 clat (usec): min=6596, max=52924, avg=11952.46, stdev=4926.47 00:27:50.352 lat (usec): min=6607, max=52938, avg=11965.69, stdev=4926.58 00:27:50.352 clat percentiles (usec): 00:27:50.352 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10683], 00:27:50.352 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:27:50.352 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:27:50.352 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:27:50.352 | 99.99th=[52691] 00:27:50.352 bw ( KiB/s): min=27904, max=34816, per=36.34%, avg=32057.30, stdev=2272.45, samples=10 00:27:50.352 iops : min= 218, max= 272, avg=250.40, stdev=17.78, samples=10 00:27:50.352 lat (msec) : 10=7.26%, 20=91.31%, 50=0.08%, 100=1.36% 00:27:50.352 cpu : usr=91.90%, sys=6.34%, ctx=23, majf=0, minf=0 00:27:50.352 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.352 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.352 filename0: (groupid=0, jobs=1): err= 0: pid=110549: Mon Dec 16 11:47:54 2024 00:27:50.352 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(123MiB/5005msec) 00:27:50.352 slat (nsec): min=5374, max=46683, avg=11064.62, stdev=4529.31 00:27:50.352 clat (usec): min=4514, max=18360, avg=15277.48, stdev=2278.36 00:27:50.352 lat (usec): min=4522, max=18374, avg=15288.55, stdev=2278.55 00:27:50.352 clat percentiles (usec): 00:27:50.352 | 1.00th=[ 4621], 5.00th=[ 9896], 10.00th=[12518], 20.00th=[14877], 00:27:50.352 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:27:50.352 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433], 00:27:50.352 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:27:50.352 | 99.99th=[18482] 00:27:50.352 bw ( KiB/s): min=22784, max=27703, per=28.39%, avg=25042.30, stdev=1732.13, samples=10 00:27:50.352 iops : min= 178, max= 216, avg=195.60, stdev=13.46, samples=10 00:27:50.352 lat (msec) : 10=6.32%, 20=93.68% 00:27:50.352 cpu : usr=92.67%, sys=5.84%, ctx=61, majf=0, minf=0 00:27:50.352 IO depths : 1=28.4%, 2=71.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.352 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.352 filename0: (groupid=0, jobs=1): err= 0: pid=110550: Mon Dec 16 11:47:54 2024 00:27:50.352 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5005msec) 00:27:50.352 slat (nsec): min=7302, max=45266, avg=12279.33, stdev=3580.31 00:27:50.352 clat (usec): min=6281, max=54030, avg=12347.88, stdev=3091.84 00:27:50.352 lat (usec): min=6293, max=54043, avg=12360.16, stdev=3091.78 00:27:50.352 clat percentiles (usec): 00:27:50.352 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[10552], 20.00th=[11338], 00:27:50.352 | 
30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12649], 00:27:50.352 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:27:50.352 | 99.00th=[15008], 99.50th=[15533], 99.90th=[52691], 99.95th=[54264], 00:27:50.352 | 99.99th=[54264] 00:27:50.352 bw ( KiB/s): min=27392, max=34560, per=35.18%, avg=31027.20, stdev=2075.55, samples=10 00:27:50.352 iops : min= 214, max= 270, avg=242.40, stdev=16.22, samples=10 00:27:50.352 lat (msec) : 10=7.50%, 20=92.01%, 50=0.25%, 100=0.25% 00:27:50.352 cpu : usr=92.09%, sys=6.24%, ctx=23, majf=0, minf=0 00:27:50.352 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.352 issued rwts: total=1214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.352 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.352 00:27:50.352 Run status group 0 (all jobs): 00:27:50.352 READ: bw=86.1MiB/s (90.3MB/s), 24.5MiB/s-31.3MiB/s (25.7MB/s-32.8MB/s), io=431MiB (452MB), run=5004-5005msec 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:50.352 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 bdev_null0 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 [2024-12-16 11:47:54.946837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 bdev_null1 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 bdev_null2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # 
local subsystem config 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:50.353 { 00:27:50.353 "params": { 00:27:50.353 "name": "Nvme$subsystem", 00:27:50.353 "trtype": "$TEST_TRANSPORT", 00:27:50.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.353 "adrfam": "ipv4", 00:27:50.353 "trsvcid": "$NVMF_PORT", 00:27:50.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.353 "hdgst": ${hdgst:-false}, 00:27:50.353 "ddgst": ${ddgst:-false} 00:27:50.353 }, 00:27:50.353 "method": "bdev_nvme_attach_controller" 00:27:50.353 } 00:27:50.353 EOF 00:27:50.353 )") 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:50.353 { 00:27:50.353 "params": { 00:27:50.353 "name": "Nvme$subsystem", 00:27:50.353 "trtype": "$TEST_TRANSPORT", 00:27:50.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.353 "adrfam": "ipv4", 00:27:50.353 "trsvcid": "$NVMF_PORT", 00:27:50.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.353 "hdgst": ${hdgst:-false}, 00:27:50.353 "ddgst": ${ddgst:-false} 00:27:50.353 }, 00:27:50.353 "method": "bdev_nvme_attach_controller" 00:27:50.353 } 00:27:50.353 EOF 00:27:50.353 )") 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:50.353 { 00:27:50.353 "params": { 00:27:50.353 "name": "Nvme$subsystem", 00:27:50.353 "trtype": "$TEST_TRANSPORT", 00:27:50.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.353 "adrfam": "ipv4", 00:27:50.353 "trsvcid": "$NVMF_PORT", 00:27:50.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.353 "hdgst": ${hdgst:-false}, 00:27:50.353 "ddgst": ${ddgst:-false} 00:27:50.353 }, 00:27:50.353 "method": "bdev_nvme_attach_controller" 00:27:50.353 } 00:27:50.353 EOF 00:27:50.353 )") 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:50.353 11:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:50.353 "params": { 00:27:50.353 "name": "Nvme0", 00:27:50.353 "trtype": "tcp", 00:27:50.353 "traddr": "10.0.0.3", 00:27:50.353 "adrfam": "ipv4", 00:27:50.353 "trsvcid": "4420", 00:27:50.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.353 "hdgst": false, 00:27:50.353 "ddgst": false 00:27:50.353 }, 00:27:50.354 "method": "bdev_nvme_attach_controller" 00:27:50.354 },{ 00:27:50.354 "params": { 00:27:50.354 "name": "Nvme1", 00:27:50.354 "trtype": "tcp", 00:27:50.354 "traddr": "10.0.0.3", 00:27:50.354 "adrfam": "ipv4", 00:27:50.354 "trsvcid": "4420", 00:27:50.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.354 "hdgst": false, 00:27:50.354 "ddgst": false 00:27:50.354 }, 00:27:50.354 "method": "bdev_nvme_attach_controller" 00:27:50.354 },{ 00:27:50.354 "params": { 00:27:50.354 "name": "Nvme2", 00:27:50.354 "trtype": "tcp", 00:27:50.354 "traddr": "10.0.0.3", 00:27:50.354 "adrfam": "ipv4", 00:27:50.354 "trsvcid": "4420", 00:27:50.354 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.354 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:50.354 "hdgst": false, 00:27:50.354 "ddgst": false 00:27:50.354 }, 00:27:50.354 "method": "bdev_nvme_attach_controller" 00:27:50.354 }' 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:50.354 11:47:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:50.354 11:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.354 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:50.354 ... 00:27:50.354 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:50.354 ... 00:27:50.354 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:50.354 ... 00:27:50.354 fio-3.35 00:27:50.354 Starting 24 threads 00:28:02.560 00:28:02.560 filename0: (groupid=0, jobs=1): err= 0: pid=110645: Mon Dec 16 11:48:06 2024 00:28:02.560 read: IOPS=245, BW=982KiB/s (1005kB/s)(9848KiB/10033msec) 00:28:02.560 slat (usec): min=7, max=8033, avg=18.62, stdev=229.42 00:28:02.560 clat (msec): min=9, max=199, avg=64.98, stdev=23.35 00:28:02.560 lat (msec): min=9, max=199, avg=65.00, stdev=23.35 00:28:02.560 clat percentiles (msec): 00:28:02.560 | 1.00th=[ 15], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:28:02.560 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:28:02.560 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 102], 00:28:02.560 | 99.00th=[ 138], 99.50th=[ 167], 99.90th=[ 201], 99.95th=[ 201], 00:28:02.560 | 99.99th=[ 201] 00:28:02.560 bw ( KiB/s): min= 688, max= 1552, per=4.75%, avg=977.95, stdev=200.51, samples=20 00:28:02.560 iops : min= 172, max= 388, avg=244.45, stdev=50.11, samples=20 00:28:02.560 lat (msec) : 10=0.65%, 20=1.95%, 50=24.53%, 100=66.98%, 250=5.89% 00:28:02.560 cpu : usr=41.40%, sys=0.84%, ctx=1207, majf=0, minf=9 00:28:02.560 IO depths : 1=0.6%, 2=1.4%, 4=6.9%, 8=77.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:28:02.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.560 complete : 0=0.0%, 4=89.4%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.560 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.560 filename0: (groupid=0, jobs=1): err= 0: pid=110646: Mon Dec 16 11:48:06 2024 00:28:02.560 read: IOPS=253, BW=1013KiB/s (1037kB/s)(9.93MiB/10037msec) 00:28:02.560 slat (usec): min=4, max=8020, avg=18.33, stdev=225.33 00:28:02.560 clat (usec): min=1712, max=194104, avg=62953.75, stdev=25730.04 00:28:02.560 lat (usec): min=1721, max=194119, avg=62972.08, stdev=25734.88 00:28:02.560 clat percentiles (msec): 00:28:02.560 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 41], 20.00th=[ 48], 00:28:02.560 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 69], 00:28:02.560 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 108], 00:28:02.560 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 194], 99.95th=[ 194], 00:28:02.560 | 99.99th=[ 194] 00:28:02.560 bw ( KiB/s): min= 552, max= 2432, per=4.92%, avg=1012.80, stdev=371.66, samples=20 00:28:02.560 iops : min= 138, max= 608, avg=253.20, stdev=92.91, samples=20 00:28:02.560 lat (msec) : 2=0.87%, 4=2.28%, 10=1.89%, 20=1.89%, 50=21.48% 00:28:02.560 lat (msec) : 100=65.11%, 250=6.49% 00:28:02.560 cpu : usr=36.31%, sys=0.84%, ctx=1035, majf=0, minf=0 00:28:02.560 IO depths : 1=1.1%, 2=2.3%, 4=9.6%, 8=74.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:02.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.560 
complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=2542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110647: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=259, BW=1037KiB/s (1062kB/s)(10.2MiB/10027msec) 00:28:02.561 slat (usec): min=4, max=4022, avg=12.70, stdev=78.85 00:28:02.561 clat (msec): min=4, max=244, avg=61.53, stdev=24.28 00:28:02.561 lat (msec): min=4, max=244, avg=61.55, stdev=24.28 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 45], 20.00th=[ 48], 00:28:02.561 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 64], 00:28:02.561 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 97], 00:28:02.561 | 99.00th=[ 117], 99.50th=[ 245], 99.90th=[ 245], 99.95th=[ 245], 00:28:02.561 | 99.99th=[ 245] 00:28:02.561 bw ( KiB/s): min= 384, max= 1825, per=5.04%, avg=1036.45, stdev=261.01, samples=20 00:28:02.561 iops : min= 96, max= 456, avg=259.10, stdev=65.21, samples=20 00:28:02.561 lat (msec) : 10=1.85%, 20=1.85%, 50=26.36%, 100=65.72%, 250=4.23% 00:28:02.561 cpu : usr=44.54%, sys=0.80%, ctx=1261, majf=0, minf=0 00:28:02.561 IO depths : 1=1.3%, 2=3.2%, 4=11.2%, 8=72.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110648: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=198, BW=796KiB/s (815kB/s)(7968KiB/10014msec) 00:28:02.561 slat (usec): min=4, max=8037, avg=23.84, stdev=290.75 00:28:02.561 clat (msec): min=15, max=226, avg=80.29, stdev=24.23 00:28:02.561 lat (msec): min=15, max=226, avg=80.31, stdev=24.24 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:28:02.561 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:28:02.561 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 129], 00:28:02.561 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 228], 99.95th=[ 228], 00:28:02.561 | 99.99th=[ 228] 00:28:02.561 bw ( KiB/s): min= 512, max= 944, per=3.81%, avg=784.84, stdev=100.70, samples=19 00:28:02.561 iops : min= 128, max= 236, avg=196.21, stdev=25.17, samples=19 00:28:02.561 lat (msec) : 20=0.80%, 50=9.99%, 100=73.74%, 250=15.46% 00:28:02.561 cpu : usr=34.09%, sys=0.75%, ctx=955, majf=0, minf=9 00:28:02.561 IO depths : 1=2.0%, 2=4.8%, 4=14.1%, 8=68.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110649: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=201, BW=805KiB/s (825kB/s)(8068KiB/10018msec) 00:28:02.561 slat (nsec): min=4981, max=99887, avg=11273.03, stdev=5081.57 00:28:02.561 clat (msec): min=22, max=227, avg=79.33, stdev=25.88 00:28:02.561 lat (msec): min=22, max=227, avg=79.34, stdev=25.88 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 28], 5.00th=[ 45], 
10.00th=[ 48], 20.00th=[ 58], 00:28:02.561 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:28:02.561 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:28:02.561 | 99.00th=[ 146], 99.50th=[ 165], 99.90th=[ 228], 99.95th=[ 228], 00:28:02.561 | 99.99th=[ 228] 00:28:02.561 bw ( KiB/s): min= 592, max= 1072, per=3.89%, avg=801.47, stdev=116.11, samples=19 00:28:02.561 iops : min= 148, max= 268, avg=200.32, stdev=29.05, samples=19 00:28:02.561 lat (msec) : 50=17.20%, 100=64.15%, 250=18.64% 00:28:02.561 cpu : usr=32.47%, sys=0.77%, ctx=861, majf=0, minf=9 00:28:02.561 IO depths : 1=1.3%, 2=3.6%, 4=13.2%, 8=70.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110650: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=218, BW=872KiB/s (893kB/s)(8732KiB/10009msec) 00:28:02.561 slat (usec): min=5, max=4033, avg=17.08, stdev=148.94 00:28:02.561 clat (msec): min=31, max=223, avg=73.26, stdev=24.13 00:28:02.561 lat (msec): min=31, max=223, avg=73.27, stdev=24.13 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:28:02.561 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:28:02.561 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 113], 00:28:02.561 | 99.00th=[ 146], 99.50th=[ 224], 99.90th=[ 224], 99.95th=[ 224], 00:28:02.561 | 99.99th=[ 224] 00:28:02.561 bw ( KiB/s): min= 512, max= 1120, per=4.24%, avg=872.00, stdev=142.93, samples=19 00:28:02.561 iops : min= 128, max= 280, avg=218.00, stdev=35.73, samples=19 00:28:02.561 lat (msec) : 50=13.15%, 100=77.51%, 250=9.34% 00:28:02.561 cpu : usr=40.24%, sys=0.85%, ctx=1353, majf=0, minf=9 00:28:02.561 IO depths : 1=1.3%, 2=3.0%, 4=10.7%, 8=72.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110651: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=205, BW=820KiB/s (840kB/s)(8212KiB/10013msec) 00:28:02.561 slat (usec): min=3, max=8022, avg=15.34, stdev=176.87 00:28:02.561 clat (msec): min=13, max=227, avg=77.90, stdev=27.07 00:28:02.561 lat (msec): min=13, max=227, avg=77.91, stdev=27.08 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:28:02.561 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 83], 00:28:02.561 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 126], 00:28:02.561 | 99.00th=[ 157], 99.50th=[ 228], 99.90th=[ 228], 99.95th=[ 228], 00:28:02.561 | 99.99th=[ 228] 00:28:02.561 bw ( KiB/s): min= 424, max= 1152, per=3.96%, avg=814.84, stdev=165.26, samples=19 00:28:02.561 iops : min= 106, max= 288, avg=203.68, stdev=41.35, samples=19 00:28:02.561 lat (msec) : 20=0.78%, 50=15.00%, 100=66.63%, 250=17.58% 00:28:02.561 cpu : usr=32.36%, sys=0.66%, ctx=856, majf=0, minf=9 00:28:02.561 IO depths : 1=1.5%, 2=3.3%, 4=12.0%, 8=71.7%, 16=11.5%, 32=0.0%, >=64=0.0% 
00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=90.0%, 8=4.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename0: (groupid=0, jobs=1): err= 0: pid=110652: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=198, BW=793KiB/s (812kB/s)(7932KiB/10001msec) 00:28:02.561 slat (usec): min=5, max=4025, avg=15.35, stdev=90.43 00:28:02.561 clat (msec): min=12, max=223, avg=80.59, stdev=24.02 00:28:02.561 lat (msec): min=12, max=223, avg=80.60, stdev=24.02 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 17], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 68], 00:28:02.561 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 82], 00:28:02.561 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 117], 00:28:02.561 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 224], 99.95th=[ 224], 00:28:02.561 | 99.99th=[ 224] 00:28:02.561 bw ( KiB/s): min= 512, max= 1024, per=3.76%, avg=774.32, stdev=99.19, samples=19 00:28:02.561 iops : min= 128, max= 256, avg=193.58, stdev=24.80, samples=19 00:28:02.561 lat (msec) : 20=1.61%, 50=4.24%, 100=78.77%, 250=15.38% 00:28:02.561 cpu : usr=41.11%, sys=0.94%, ctx=1233, majf=0, minf=9 00:28:02.561 IO depths : 1=2.5%, 2=5.6%, 4=15.6%, 8=65.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.561 filename1: (groupid=0, jobs=1): err= 0: pid=110653: Mon Dec 16 11:48:06 2024 00:28:02.561 read: IOPS=244, BW=979KiB/s (1002kB/s)(9832KiB/10044msec) 00:28:02.561 slat (usec): min=7, max=4158, avg=13.51, stdev=83.88 00:28:02.561 clat (msec): min=9, max=199, avg=65.28, stdev=23.87 00:28:02.561 lat (msec): min=9, max=199, avg=65.30, stdev=23.88 00:28:02.561 clat percentiles (msec): 00:28:02.561 | 1.00th=[ 15], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:28:02.561 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 68], 00:28:02.561 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:28:02.561 | 99.00th=[ 128], 99.50th=[ 165], 99.90th=[ 201], 99.95th=[ 201], 00:28:02.561 | 99.99th=[ 201] 00:28:02.561 bw ( KiB/s): min= 592, max= 1536, per=4.74%, avg=976.40, stdev=209.35, samples=20 00:28:02.561 iops : min= 148, max= 384, avg=244.10, stdev=52.34, samples=20 00:28:02.561 lat (msec) : 10=0.65%, 20=2.20%, 50=24.78%, 100=64.56%, 250=7.81% 00:28:02.561 cpu : usr=42.29%, sys=0.93%, ctx=1315, majf=0, minf=9 00:28:02.561 IO depths : 1=0.7%, 2=1.9%, 4=8.3%, 8=75.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.561 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=2458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110654: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=223, BW=896KiB/s (917kB/s)(8976KiB/10019msec) 00:28:02.562 slat (usec): min=4, max=8018, avg=16.40, stdev=189.16 00:28:02.562 clat (msec): min=15, max=270, avg=71.33, stdev=28.09 00:28:02.562 lat (msec): min=15, max=270, avg=71.34, 
stdev=28.10 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 19], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 50], 00:28:02.562 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 73], 00:28:02.562 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 102], 95.00th=[ 108], 00:28:02.562 | 99.00th=[ 163], 99.50th=[ 245], 99.90th=[ 271], 99.95th=[ 271], 00:28:02.562 | 99.99th=[ 271] 00:28:02.562 bw ( KiB/s): min= 424, max= 1152, per=4.26%, avg=876.26, stdev=181.94, samples=19 00:28:02.562 iops : min= 106, max= 288, avg=219.05, stdev=45.49, samples=19 00:28:02.562 lat (msec) : 20=1.43%, 50=19.65%, 100=68.36%, 250=10.34%, 500=0.22% 00:28:02.562 cpu : usr=36.97%, sys=0.71%, ctx=1069, majf=0, minf=9 00:28:02.562 IO depths : 1=1.5%, 2=3.3%, 4=11.7%, 8=71.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110655: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=230, BW=920KiB/s (942kB/s)(9232KiB/10033msec) 00:28:02.562 slat (usec): min=7, max=3578, avg=14.10, stdev=93.97 00:28:02.562 clat (msec): min=15, max=190, avg=69.41, stdev=22.68 00:28:02.562 lat (msec): min=16, max=190, avg=69.43, stdev=22.68 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 52], 00:28:02.562 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:28:02.562 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 107], 00:28:02.562 | 99.00th=[ 161], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:28:02.562 | 99.99th=[ 190] 00:28:02.562 bw ( KiB/s): min= 640, max= 1120, per=4.45%, avg=916.40, stdev=133.17, samples=20 00:28:02.562 iops : min= 160, max= 280, avg=229.10, stdev=33.29, samples=20 00:28:02.562 lat (msec) : 20=0.69%, 50=18.76%, 100=74.00%, 250=6.54% 00:28:02.562 cpu : usr=46.74%, sys=1.21%, ctx=1323, majf=0, minf=9 00:28:02.562 IO depths : 1=1.6%, 2=3.3%, 4=11.1%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110656: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=182, BW=729KiB/s (747kB/s)(7296KiB/10005msec) 00:28:02.562 slat (usec): min=5, max=8035, avg=20.97, stdev=265.41 00:28:02.562 clat (msec): min=10, max=236, avg=87.65, stdev=26.22 00:28:02.562 lat (msec): min=10, max=236, avg=87.67, stdev=26.22 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 63], 20.00th=[ 72], 00:28:02.562 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 94], 00:28:02.562 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:28:02.562 | 99.00th=[ 157], 99.50th=[ 171], 99.90th=[ 236], 99.95th=[ 236], 00:28:02.562 | 99.99th=[ 236] 00:28:02.562 bw ( KiB/s): min= 512, max= 768, per=3.42%, avg=704.89, stdev=68.50, samples=19 00:28:02.562 iops : min= 128, max= 192, avg=176.21, stdev=17.11, samples=19 00:28:02.562 lat (msec) : 20=2.08%, 50=4.28%, 100=71.05%, 250=22.59% 00:28:02.562 cpu : usr=32.68%, 
sys=0.64%, ctx=858, majf=0, minf=9 00:28:02.562 IO depths : 1=3.5%, 2=7.4%, 4=18.5%, 8=61.6%, 16=9.0%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=92.2%, 8=2.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110657: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=194, BW=779KiB/s (798kB/s)(7796KiB/10003msec) 00:28:02.562 slat (usec): min=4, max=8020, avg=17.46, stdev=202.89 00:28:02.562 clat (msec): min=10, max=188, avg=82.01, stdev=25.16 00:28:02.562 lat (msec): min=10, max=188, avg=82.03, stdev=25.16 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 12], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 69], 00:28:02.562 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 85], 00:28:02.562 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 123], 00:28:02.562 | 99.00th=[ 157], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 190], 00:28:02.562 | 99.99th=[ 190] 00:28:02.562 bw ( KiB/s): min= 560, max= 896, per=3.67%, avg=755.63, stdev=102.51, samples=19 00:28:02.562 iops : min= 140, max= 224, avg=188.89, stdev=25.64, samples=19 00:28:02.562 lat (msec) : 20=2.46%, 50=6.62%, 100=72.60%, 250=18.32% 00:28:02.562 cpu : usr=37.19%, sys=0.80%, ctx=1038, majf=0, minf=9 00:28:02.562 IO depths : 1=2.1%, 2=4.7%, 4=14.0%, 8=68.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110658: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=228, BW=915KiB/s (937kB/s)(9180KiB/10032msec) 00:28:02.562 slat (usec): min=4, max=4031, avg=13.80, stdev=84.11 00:28:02.562 clat (msec): min=9, max=240, avg=69.79, stdev=25.72 00:28:02.562 lat (msec): min=9, max=240, avg=69.81, stdev=25.72 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:28:02.562 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:28:02.562 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:28:02.562 | 99.00th=[ 144], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 241], 00:28:02.562 | 99.99th=[ 241] 00:28:02.562 bw ( KiB/s): min= 512, max= 1560, per=4.44%, avg=913.55, stdev=199.84, samples=20 00:28:02.562 iops : min= 128, max= 390, avg=228.35, stdev=49.94, samples=20 00:28:02.562 lat (msec) : 10=0.70%, 20=2.79%, 50=18.61%, 100=71.76%, 250=6.14% 00:28:02.562 cpu : usr=33.77%, sys=1.03%, ctx=1045, majf=0, minf=9 00:28:02.562 IO depths : 1=1.0%, 2=2.1%, 4=8.3%, 8=75.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=89.7%, 8=6.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110659: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=197, BW=788KiB/s (807kB/s)(7884KiB/10005msec) 00:28:02.562 slat (usec): min=7, max=8033, avg=16.02, stdev=180.77 
00:28:02.562 clat (msec): min=10, max=202, avg=81.12, stdev=24.47 00:28:02.562 lat (msec): min=10, max=202, avg=81.14, stdev=24.47 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 62], 00:28:02.562 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 84], 00:28:02.562 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:28:02.562 | 99.00th=[ 146], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 203], 00:28:02.562 | 99.99th=[ 203] 00:28:02.562 bw ( KiB/s): min= 640, max= 1072, per=3.73%, avg=768.95, stdev=96.52, samples=19 00:28:02.562 iops : min= 160, max= 268, avg=192.21, stdev=24.15, samples=19 00:28:02.562 lat (msec) : 20=1.62%, 50=6.90%, 100=75.14%, 250=16.34% 00:28:02.562 cpu : usr=32.49%, sys=0.66%, ctx=857, majf=0, minf=10 00:28:02.562 IO depths : 1=1.8%, 2=4.2%, 4=12.6%, 8=70.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:02.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.562 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.562 filename1: (groupid=0, jobs=1): err= 0: pid=110660: Mon Dec 16 11:48:06 2024 00:28:02.562 read: IOPS=214, BW=859KiB/s (880kB/s)(8604KiB/10011msec) 00:28:02.562 slat (usec): min=4, max=7019, avg=21.71, stdev=227.97 00:28:02.562 clat (msec): min=12, max=200, avg=74.31, stdev=21.29 00:28:02.562 lat (msec): min=12, max=200, avg=74.34, stdev=21.29 00:28:02.562 clat percentiles (msec): 00:28:02.562 | 1.00th=[ 18], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 57], 00:28:02.562 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 79], 00:28:02.562 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 95], 95.00th=[ 106], 00:28:02.562 | 99.00th=[ 136], 99.50th=[ 167], 99.90th=[ 201], 99.95th=[ 201], 00:28:02.562 | 99.99th=[ 201] 00:28:02.562 bw ( KiB/s): min= 608, max= 1072, per=4.11%, avg=845.05, stdev=133.76, samples=19 00:28:02.563 iops : min= 152, max= 268, avg=211.26, stdev=33.44, samples=19 00:28:02.563 lat (msec) : 20=1.49%, 50=10.88%, 100=80.75%, 250=6.88% 00:28:02.563 cpu : usr=40.30%, sys=0.95%, ctx=1504, majf=0, minf=9 00:28:02.563 IO depths : 1=2.1%, 2=4.8%, 4=14.5%, 8=67.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110661: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=192, BW=771KiB/s (789kB/s)(7712KiB/10003msec) 00:28:02.563 slat (usec): min=4, max=8035, avg=16.59, stdev=182.83 00:28:02.563 clat (msec): min=13, max=226, avg=82.88, stdev=26.40 00:28:02.563 lat (msec): min=13, max=226, avg=82.90, stdev=26.40 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 38], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 67], 00:28:02.563 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:28:02.563 | 70.00th=[ 89], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 128], 00:28:02.563 | 99.00th=[ 209], 99.50th=[ 228], 99.90th=[ 228], 99.95th=[ 228], 00:28:02.563 | 99.99th=[ 228] 00:28:02.563 bw ( KiB/s): min= 512, max= 944, per=3.69%, avg=759.47, stdev=111.94, samples=19 00:28:02.563 iops : min= 128, max= 236, avg=189.84, stdev=27.98, samples=19 
00:28:02.563 lat (msec) : 20=0.16%, 50=4.56%, 100=76.45%, 250=18.83% 00:28:02.563 cpu : usr=40.69%, sys=1.01%, ctx=1215, majf=0, minf=9 00:28:02.563 IO depths : 1=2.3%, 2=4.9%, 4=13.6%, 8=67.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110662: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=187, BW=749KiB/s (767kB/s)(7512KiB/10023msec) 00:28:02.563 slat (usec): min=4, max=8023, avg=22.41, stdev=277.25 00:28:02.563 clat (msec): min=17, max=229, avg=85.19, stdev=27.53 00:28:02.563 lat (msec): min=17, max=229, avg=85.21, stdev=27.53 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 23], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 68], 00:28:02.563 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 86], 00:28:02.563 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 120], 95.00th=[ 132], 00:28:02.563 | 99.00th=[ 169], 99.50th=[ 230], 99.90th=[ 230], 99.95th=[ 230], 00:28:02.563 | 99.99th=[ 230] 00:28:02.563 bw ( KiB/s): min= 344, max= 1026, per=3.62%, avg=744.90, stdev=153.20, samples=20 00:28:02.563 iops : min= 86, max= 256, avg=186.20, stdev=38.25, samples=20 00:28:02.563 lat (msec) : 20=0.85%, 50=7.19%, 100=69.86%, 250=22.10% 00:28:02.563 cpu : usr=34.61%, sys=0.58%, ctx=956, majf=0, minf=9 00:28:02.563 IO depths : 1=3.1%, 2=6.8%, 4=17.4%, 8=62.9%, 16=9.7%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=1878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110663: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=233, BW=933KiB/s (955kB/s)(9340KiB/10012msec) 00:28:02.563 slat (usec): min=4, max=5991, avg=15.64, stdev=140.28 00:28:02.563 clat (msec): min=13, max=221, avg=68.49, stdev=23.24 00:28:02.563 lat (msec): min=13, max=221, avg=68.50, stdev=23.24 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 50], 00:28:02.563 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 71], 00:28:02.563 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:28:02.563 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 222], 99.95th=[ 222], 00:28:02.563 | 99.99th=[ 222] 00:28:02.563 bw ( KiB/s): min= 560, max= 1248, per=4.48%, avg=921.26, stdev=163.52, samples=19 00:28:02.563 iops : min= 140, max= 312, avg=230.32, stdev=40.88, samples=19 00:28:02.563 lat (msec) : 20=0.51%, 50=21.03%, 100=71.73%, 250=6.72% 00:28:02.563 cpu : usr=38.97%, sys=0.88%, ctx=1500, majf=0, minf=9 00:28:02.563 IO depths : 1=0.6%, 2=1.3%, 4=7.7%, 8=77.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=89.2%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110664: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=219, BW=878KiB/s 
(899kB/s)(8792KiB/10017msec) 00:28:02.563 slat (nsec): min=4842, max=67536, avg=12020.09, stdev=6539.95 00:28:02.563 clat (msec): min=31, max=210, avg=72.83, stdev=22.30 00:28:02.563 lat (msec): min=31, max=210, avg=72.85, stdev=22.30 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 56], 00:28:02.563 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:28:02.563 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 113], 00:28:02.563 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 211], 99.95th=[ 211], 00:28:02.563 | 99.99th=[ 211] 00:28:02.563 bw ( KiB/s): min= 616, max= 1072, per=4.24%, avg=872.80, stdev=120.12, samples=20 00:28:02.563 iops : min= 154, max= 268, avg=218.20, stdev=30.03, samples=20 00:28:02.563 lat (msec) : 50=13.79%, 100=76.43%, 250=9.78% 00:28:02.563 cpu : usr=41.49%, sys=0.87%, ctx=1096, majf=0, minf=9 00:28:02.563 IO depths : 1=1.9%, 2=3.9%, 4=11.8%, 8=71.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110665: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=215, BW=861KiB/s (882kB/s)(8648KiB/10045msec) 00:28:02.563 slat (usec): min=3, max=8026, avg=21.38, stdev=275.93 00:28:02.563 clat (msec): min=4, max=224, avg=74.10, stdev=27.86 00:28:02.563 lat (msec): min=4, max=224, avg=74.13, stdev=27.87 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 47], 20.00th=[ 51], 00:28:02.563 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:28:02.563 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 122], 00:28:02.563 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 226], 00:28:02.563 | 99.99th=[ 226] 00:28:02.563 bw ( KiB/s): min= 512, max= 1624, per=4.17%, avg=858.40, stdev=226.48, samples=20 00:28:02.563 iops : min= 128, max= 406, avg=214.60, stdev=56.62, samples=20 00:28:02.563 lat (msec) : 10=1.48%, 20=2.87%, 50=15.31%, 100=67.25%, 250=13.09% 00:28:02.563 cpu : usr=32.42%, sys=0.85%, ctx=869, majf=0, minf=10 00:28:02.563 IO depths : 1=1.1%, 2=2.8%, 4=12.6%, 8=71.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=90.1%, 8=4.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110666: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=229, BW=920KiB/s (942kB/s)(9212KiB/10016msec) 00:28:02.563 slat (usec): min=7, max=8025, avg=14.64, stdev=167.06 00:28:02.563 clat (msec): min=14, max=218, avg=69.50, stdev=22.90 00:28:02.563 lat (msec): min=14, max=218, avg=69.52, stdev=22.90 00:28:02.563 clat percentiles (msec): 00:28:02.563 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:28:02.563 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:28:02.563 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 99], 00:28:02.563 | 99.00th=[ 131], 99.50th=[ 205], 99.90th=[ 220], 99.95th=[ 220], 00:28:02.563 | 99.99th=[ 220] 00:28:02.563 bw ( KiB/s): min= 560, max= 1280, per=4.44%, avg=914.45, 
stdev=164.77, samples=20 00:28:02.563 iops : min= 140, max= 320, avg=228.60, stdev=41.20, samples=20 00:28:02.563 lat (msec) : 20=2.08%, 50=18.89%, 100=74.51%, 250=4.52% 00:28:02.563 cpu : usr=32.68%, sys=0.66%, ctx=861, majf=0, minf=9 00:28:02.563 IO depths : 1=0.4%, 2=0.9%, 4=5.5%, 8=78.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:02.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 complete : 0=0.0%, 4=89.3%, 8=7.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.563 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.563 filename2: (groupid=0, jobs=1): err= 0: pid=110667: Mon Dec 16 11:48:06 2024 00:28:02.563 read: IOPS=185, BW=743KiB/s (761kB/s)(7432KiB/10005msec) 00:28:02.563 slat (usec): min=5, max=8042, avg=19.19, stdev=208.31 00:28:02.563 clat (msec): min=10, max=274, avg=86.01, stdev=27.20 00:28:02.563 lat (msec): min=10, max=274, avg=86.03, stdev=27.20 00:28:02.564 clat percentiles (msec): 00:28:02.564 | 1.00th=[ 28], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 70], 00:28:02.564 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 87], 00:28:02.564 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 128], 00:28:02.564 | 99.00th=[ 171], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 275], 00:28:02.564 | 99.99th=[ 275] 00:28:02.564 bw ( KiB/s): min= 384, max= 816, per=3.54%, avg=728.11, stdev=95.05, samples=19 00:28:02.564 iops : min= 96, max= 204, avg=182.00, stdev=23.77, samples=19 00:28:02.564 lat (msec) : 20=0.86%, 50=3.28%, 100=76.59%, 250=18.41%, 500=0.86% 00:28:02.564 cpu : usr=42.48%, sys=0.96%, ctx=1448, majf=0, minf=9 00:28:02.564 IO depths : 1=2.2%, 2=4.6%, 4=13.1%, 8=68.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:02.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.564 complete : 0=0.0%, 4=90.9%, 8=4.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.564 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.564 filename2: (groupid=0, jobs=1): err= 0: pid=110668: Mon Dec 16 11:48:06 2024 00:28:02.564 read: IOPS=197, BW=790KiB/s (809kB/s)(7904KiB/10005msec) 00:28:02.564 slat (usec): min=6, max=8043, avg=16.67, stdev=180.81 00:28:02.564 clat (msec): min=12, max=253, avg=80.84, stdev=23.92 00:28:02.564 lat (msec): min=12, max=253, avg=80.86, stdev=23.92 00:28:02.564 clat percentiles (msec): 00:28:02.564 | 1.00th=[ 17], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 69], 00:28:02.564 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 83], 00:28:02.564 | 70.00th=[ 89], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 113], 00:28:02.564 | 99.00th=[ 157], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 253], 00:28:02.564 | 99.99th=[ 253] 00:28:02.564 bw ( KiB/s): min= 472, max= 896, per=3.75%, avg=771.05, stdev=93.65, samples=19 00:28:02.564 iops : min= 118, max= 224, avg=192.74, stdev=23.43, samples=19 00:28:02.564 lat (msec) : 20=1.62%, 50=4.35%, 100=78.39%, 250=15.38%, 500=0.25% 00:28:02.564 cpu : usr=41.97%, sys=1.09%, ctx=1415, majf=0, minf=9 00:28:02.564 IO depths : 1=3.1%, 2=6.7%, 4=16.8%, 8=63.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:28:02.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.564 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.564 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:02.564 
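[editor's note] In the per-file stats above, per= is each job's share of the aggregate READ bandwidth in the run summary that follows; e.g. filename0's avg=977.95 KiB/s at per=4.75% implies an aggregate near 20.1 MiB/s (~20600 KiB/s). A quick cross-check sketch, assuming only this 24-thread group's output was captured to a hypothetical fio.log:

# Sum the per-job average bandwidths; the total should land close to the
# group READ bw reported on the "Run status" line below.
grep 'bw ( KiB/s)' fio.log \
  | sed 's/.*avg=\([0-9.]*\).*/\1/' \
  | awk '{s+=$1} END {printf "aggregate ~ %.0f KiB/s\n", s}'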
00:28:02.564 Run status group 0 (all jobs): 00:28:02.564 READ: bw=20.1MiB/s (21.1MB/s), 729KiB/s-1037KiB/s (747kB/s-1062kB/s), io=202MiB (212MB), run=10001-10045msec 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 
11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 bdev_null0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 [2024-12-16 11:48:06.540640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 bdev_null1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.564 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:02.565 { 00:28:02.565 "params": { 00:28:02.565 "name": 
"Nvme$subsystem", 00:28:02.565 "trtype": "$TEST_TRANSPORT", 00:28:02.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.565 "adrfam": "ipv4", 00:28:02.565 "trsvcid": "$NVMF_PORT", 00:28:02.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.565 "hdgst": ${hdgst:-false}, 00:28:02.565 "ddgst": ${ddgst:-false} 00:28:02.565 }, 00:28:02.565 "method": "bdev_nvme_attach_controller" 00:28:02.565 } 00:28:02.565 EOF 00:28:02.565 )") 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:02.565 { 00:28:02.565 "params": { 00:28:02.565 "name": "Nvme$subsystem", 00:28:02.565 "trtype": "$TEST_TRANSPORT", 00:28:02.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.565 "adrfam": "ipv4", 00:28:02.565 "trsvcid": "$NVMF_PORT", 00:28:02.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.565 "hdgst": ${hdgst:-false}, 00:28:02.565 "ddgst": ${ddgst:-false} 00:28:02.565 }, 00:28:02.565 "method": "bdev_nvme_attach_controller" 00:28:02.565 } 00:28:02.565 EOF 00:28:02.565 )") 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:02.565 "params": { 00:28:02.565 "name": "Nvme0", 00:28:02.565 "trtype": "tcp", 00:28:02.565 "traddr": "10.0.0.3", 00:28:02.565 "adrfam": "ipv4", 00:28:02.565 "trsvcid": "4420", 00:28:02.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.565 "hdgst": false, 00:28:02.565 "ddgst": false 00:28:02.565 }, 00:28:02.565 "method": "bdev_nvme_attach_controller" 00:28:02.565 },{ 00:28:02.565 "params": { 00:28:02.565 "name": "Nvme1", 00:28:02.565 "trtype": "tcp", 00:28:02.565 "traddr": "10.0.0.3", 00:28:02.565 "adrfam": "ipv4", 00:28:02.565 "trsvcid": "4420", 00:28:02.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.565 "hdgst": false, 00:28:02.565 "ddgst": false 00:28:02.565 }, 00:28:02.565 "method": "bdev_nvme_attach_controller" 00:28:02.565 }' 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:02.565 11:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.565 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:02.565 ... 00:28:02.565 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:02.565 ... 
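[editor's note] Condensing the LD_PRELOAD dance just logged: the wrapper runs ldd on the fio plugin, extracts any libasan/libclang_rt.asan runtime it links against (a sanitizer runtime must come first in LD_PRELOAD), then launches fio with the SPDK bdev engine. The /dev/fd/62 and /dev/fd/61 arguments are the JSON bdev config and the fio job file the harness feeds over pipes; in a standalone run you would pass ordinary files (bdev.json and dif.fio below are placeholder names):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# empty when the build is unsanitized, as in this run
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio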
00:28:02.565 fio-3.35 00:28:02.565 Starting 4 threads 00:28:07.841 00:28:07.841 filename0: (groupid=0, jobs=1): err= 0: pid=110795: Mon Dec 16 11:48:12 2024 00:28:07.841 read: IOPS=1898, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5002msec) 00:28:07.841 slat (nsec): min=7087, max=72530, avg=13311.63, stdev=4972.46 00:28:07.841 clat (usec): min=2939, max=15500, avg=4153.42, stdev=840.98 00:28:07.841 lat (usec): min=2947, max=15526, avg=4166.73, stdev=841.02 00:28:07.841 clat percentiles (usec): 00:28:07.841 | 1.00th=[ 3851], 5.00th=[ 3982], 10.00th=[ 3982], 20.00th=[ 4015], 00:28:07.841 | 30.00th=[ 4047], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4080], 00:28:07.841 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4228], 00:28:07.841 | 99.00th=[ 4359], 99.50th=[13698], 99.90th=[13960], 99.95th=[15401], 00:28:07.841 | 99.99th=[15533] 00:28:07.841 bw ( KiB/s): min=12800, max=15616, per=24.96%, avg=15160.89, stdev=903.08, samples=9 00:28:07.841 iops : min= 1600, max= 1952, avg=1895.11, stdev=112.89, samples=9 00:28:07.841 lat (msec) : 4=13.72%, 10=85.52%, 20=0.76% 00:28:07.841 cpu : usr=94.82%, sys=4.04%, ctx=7, majf=0, minf=10 00:28:07.841 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 issued rwts: total=9496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:07.841 filename0: (groupid=0, jobs=1): err= 0: pid=110796: Mon Dec 16 11:48:12 2024 00:28:07.841 read: IOPS=1898, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5003msec) 00:28:07.841 slat (nsec): min=5259, max=55558, avg=15379.41, stdev=3867.37 00:28:07.841 clat (usec): min=2629, max=20446, avg=4138.35, stdev=861.57 00:28:07.841 lat (usec): min=2636, max=20462, avg=4153.73, stdev=861.54 00:28:07.841 clat percentiles (usec): 00:28:07.841 | 1.00th=[ 3818], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 00:28:07.841 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:28:07.841 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:28:07.841 | 99.00th=[ 4359], 99.50th=[13829], 99.90th=[13960], 99.95th=[16712], 00:28:07.841 | 99.99th=[20317] 00:28:07.841 bw ( KiB/s): min=12825, max=15744, per=24.97%, avg=15163.67, stdev=899.48, samples=9 00:28:07.841 iops : min= 1603, max= 1968, avg=1895.44, stdev=112.48, samples=9 00:28:07.841 lat (msec) : 4=21.77%, 10=77.57%, 20=0.65%, 50=0.01% 00:28:07.841 cpu : usr=94.42%, sys=4.52%, ctx=5, majf=0, minf=9 00:28:07.841 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 issued rwts: total=9496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:07.841 filename1: (groupid=0, jobs=1): err= 0: pid=110797: Mon Dec 16 11:48:12 2024 00:28:07.841 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5003msec) 00:28:07.841 slat (nsec): min=4366, max=55465, avg=15663.88, stdev=4217.24 00:28:07.841 clat (usec): min=2626, max=16975, avg=4138.37, stdev=859.48 00:28:07.841 lat (usec): min=2640, max=16989, avg=4154.04, stdev=859.27 00:28:07.841 clat percentiles (usec): 00:28:07.841 | 1.00th=[ 3818], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 
00:28:07.841 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:28:07.841 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:28:07.841 | 99.00th=[ 4359], 99.50th=[13698], 99.90th=[13960], 99.95th=[16909], 00:28:07.841 | 99.99th=[16909] 00:28:07.841 bw ( KiB/s): min=12800, max=15744, per=24.96%, avg=15160.89, stdev=907.61, samples=9 00:28:07.841 iops : min= 1600, max= 1968, avg=1895.11, stdev=113.45, samples=9 00:28:07.841 lat (msec) : 4=21.76%, 10=77.49%, 20=0.76% 00:28:07.841 cpu : usr=94.58%, sys=4.30%, ctx=11, majf=0, minf=9 00:28:07.841 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.841 issued rwts: total=9492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:07.841 filename1: (groupid=0, jobs=1): err= 0: pid=110798: Mon Dec 16 11:48:12 2024 00:28:07.841 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5004msec) 00:28:07.841 slat (nsec): min=6705, max=49650, avg=8997.73, stdev=3250.49 00:28:07.841 clat (usec): min=2431, max=13956, avg=4165.05, stdev=817.68 00:28:07.842 lat (usec): min=2446, max=13986, avg=4174.05, stdev=818.81 00:28:07.842 clat percentiles (usec): 00:28:07.842 | 1.00th=[ 3851], 5.00th=[ 4015], 10.00th=[ 4015], 20.00th=[ 4047], 00:28:07.842 | 30.00th=[ 4047], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:28:07.842 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:28:07.842 | 99.00th=[ 4359], 99.50th=[13698], 99.90th=[13829], 99.95th=[13960], 00:28:07.842 | 99.99th=[13960] 00:28:07.842 bw ( KiB/s): min=13056, max=15616, per=24.99%, avg=15175.11, stdev=819.88, samples=9 00:28:07.842 iops : min= 1632, max= 1952, avg=1896.89, stdev=102.48, samples=9 00:28:07.842 lat (msec) : 4=5.13%, 10=94.19%, 20=0.67% 00:28:07.842 cpu : usr=94.12%, sys=4.82%, ctx=6, majf=0, minf=0 00:28:07.842 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.842 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.842 issued rwts: total=9504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:07.842 00:28:07.842 Run status group 0 (all jobs): 00:28:07.842 READ: bw=59.3MiB/s (62.2MB/s), 14.8MiB/s-14.8MiB/s (15.5MB/s-15.6MB/s), io=297MiB (311MB), run=5002-5004msec 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 
11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 00:28:07.842 real 0m24.019s 00:28:07.842 user 2m6.832s 00:28:07.842 sys 0m4.768s 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 ************************************ 00:28:07.842 END TEST fio_dif_rand_params 00:28:07.842 ************************************ 00:28:07.842 11:48:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:07.842 11:48:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.842 11:48:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 ************************************ 00:28:07.842 START TEST fio_dif_digest 00:28:07.842 ************************************ 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest 
-- target/dif.sh@28 -- # local sub 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 bdev_null0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.842 [2024-12-16 11:48:12.872993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:07.842 11:48:12 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:07.842 { 00:28:07.842 "params": { 00:28:07.842 "name": "Nvme$subsystem", 00:28:07.842 "trtype": "$TEST_TRANSPORT", 00:28:07.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.842 "adrfam": "ipv4", 00:28:07.842 "trsvcid": "$NVMF_PORT", 00:28:07.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.842 "hdgst": ${hdgst:-false}, 00:28:07.842 "ddgst": ${ddgst:-false} 00:28:07.842 }, 00:28:07.842 "method": "bdev_nvme_attach_controller" 00:28:07.842 } 00:28:07.842 EOF 00:28:07.842 )") 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:07.842 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
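[editor's note] Note how the digest knobs get filled in: the heredoc above templates "hdgst": ${hdgst:-false} / "ddgst": ${ddgst:-false}, so both default to false, and fio_dif_digest flipped them to true back at dif.sh@128 — that is what enables NVMe/TCP header and data digests in the rendered config below. The shell idiom in isolation:

hdgst=true ddgst=true   # set by the digest test; left unset by the other dif tests
cat <<EOF
"hdgst": ${hdgst:-false},
"ddgst": ${ddgst:-false}
EOF
# emits true/true here; with both variables unset it would emit false/false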
00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:07.843 "params": { 00:28:07.843 "name": "Nvme0", 00:28:07.843 "trtype": "tcp", 00:28:07.843 "traddr": "10.0.0.3", 00:28:07.843 "adrfam": "ipv4", 00:28:07.843 "trsvcid": "4420", 00:28:07.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.843 "hdgst": true, 00:28:07.843 "ddgst": true 00:28:07.843 }, 00:28:07.843 "method": "bdev_nvme_attach_controller" 00:28:07.843 }' 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:07.843 11:48:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.843 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:07.843 ... 
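[editor's note] The job file arriving on /dev/fd/61 corresponds to roughly the following, reconstructed from the parameters dif.sh@127 set and the job line fio prints below. It is a sketch: the filename assumes SPDK's usual Nvme0n1 name for the attached controller's first namespace, and gen_fio_conf's exact output may differ in details:

cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF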
00:28:07.843 fio-3.35 00:28:07.843 Starting 3 threads 00:28:20.046 00:28:20.046 filename0: (groupid=0, jobs=1): err= 0: pid=110904: Mon Dec 16 11:48:23 2024 00:28:20.046 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10007msec) 00:28:20.046 slat (nsec): min=5796, max=50840, avg=13453.33, stdev=4112.03 00:28:20.046 clat (usec): min=6914, max=19156, avg=15134.44, stdev=1410.60 00:28:20.046 lat (usec): min=6922, max=19169, avg=15147.90, stdev=1410.55 00:28:20.046 clat percentiles (usec): 00:28:20.046 | 1.00th=[ 9765], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:28:20.046 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:28:20.046 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:28:20.046 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:28:20.046 | 99.99th=[19268] 00:28:20.046 bw ( KiB/s): min=24064, max=26624, per=32.58%, avg=25330.53, stdev=629.81, samples=19 00:28:20.046 iops : min= 188, max= 208, avg=197.89, stdev= 4.92, samples=19 00:28:20.046 lat (msec) : 10=1.21%, 20=98.79% 00:28:20.046 cpu : usr=92.86%, sys=5.72%, ctx=22, majf=0, minf=0 00:28:20.046 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:20.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.046 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:20.046 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:20.046 filename0: (groupid=0, jobs=1): err= 0: pid=110905: Mon Dec 16 11:48:23 2024 00:28:20.046 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(298MiB/10003msec) 00:28:20.046 slat (nsec): min=7396, max=76881, avg=13845.41, stdev=3652.15 00:28:20.046 clat (usec): min=6957, max=55816, avg=12554.15, stdev=2281.02 00:28:20.046 lat (usec): min=6965, max=55829, avg=12568.00, stdev=2281.02 00:28:20.046 clat percentiles (usec): 00:28:20.046 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:28:20.046 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:28:20.046 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:28:20.046 | 99.00th=[14615], 99.50th=[15008], 99.90th=[53740], 99.95th=[54264], 00:28:20.046 | 99.99th=[55837] 00:28:20.046 bw ( KiB/s): min=27904, max=31488, per=39.25%, avg=30517.89, stdev=929.23, samples=19 00:28:20.046 iops : min= 218, max= 246, avg=238.42, stdev= 7.26, samples=19 00:28:20.046 lat (msec) : 10=0.46%, 20=99.29%, 100=0.25% 00:28:20.046 cpu : usr=93.19%, sys=5.23%, ctx=7, majf=0, minf=0 00:28:20.046 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:20.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.046 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:20.046 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:20.046 filename0: (groupid=0, jobs=1): err= 0: pid=110906: Mon Dec 16 11:48:23 2024 00:28:20.046 read: IOPS=170, BW=21.4MiB/s (22.4MB/s)(214MiB/10007msec) 00:28:20.046 slat (nsec): min=7021, max=38820, avg=12676.33, stdev=3768.93 00:28:20.046 clat (usec): min=7074, max=22184, avg=17522.92, stdev=1179.58 00:28:20.047 lat (usec): min=7086, max=22197, avg=17535.60, stdev=1179.95 00:28:20.047 clat percentiles (usec): 00:28:20.047 | 1.00th=[11207], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 00:28:20.047 | 
30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17695], 00:28:20.047 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:28:20.047 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21103], 99.95th=[22152], 00:28:20.047 | 99.99th=[22152] 00:28:20.047 bw ( KiB/s): min=21504, max=23040, per=28.13%, avg=21870.05, stdev=505.49, samples=19 00:28:20.047 iops : min= 168, max= 180, avg=170.84, stdev= 3.96, samples=19 00:28:20.047 lat (msec) : 10=0.12%, 20=99.59%, 50=0.29% 00:28:20.047 cpu : usr=93.52%, sys=5.23%, ctx=9, majf=0, minf=0 00:28:20.047 IO depths : 1=10.2%, 2=89.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:20.047 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:20.047 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:20.047 00:28:20.047 Run status group 0 (all jobs): 00:28:20.047 READ: bw=75.9MiB/s (79.6MB/s), 21.4MiB/s-29.8MiB/s (22.4MB/s-31.3MB/s), io=760MiB (797MB), run=10003-10007msec 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.047 00:28:20.047 real 0m11.123s 00:28:20.047 user 0m28.715s 00:28:20.047 sys 0m1.917s 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.047 ************************************ 00:28:20.047 END TEST fio_dif_digest 00:28:20.047 ************************************ 00:28:20.047 11:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 11:48:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:20.047 11:48:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.047 rmmod nvme_tcp 00:28:20.047 rmmod nvme_fabrics 00:28:20.047 rmmod nvme_keyring 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 110158 ']' 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 110158 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 110158 ']' 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 110158 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110158 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.047 killing process with pid 110158 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110158' 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@973 -- # kill 110158 00:28:20.047 11:48:24 nvmf_dif -- common/autotest_common.sh@978 -- # wait 110158 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:20.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.047 Waiting for block devices as requested 00:28:20.047 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.047 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:20.047 11:48:24 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.047 11:48:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:20.047 11:48:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.047 11:48:25 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:20.047 00:28:20.047 real 1m0.486s 00:28:20.047 user 3m52.445s 00:28:20.047 sys 0m14.991s 00:28:20.047 11:48:25 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.047 11:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 ************************************ 00:28:20.047 END TEST nvmf_dif 00:28:20.047 ************************************ 00:28:20.047 11:48:25 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:20.047 11:48:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:20.047 11:48:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.047 11:48:25 -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 ************************************ 00:28:20.047 START TEST nvmf_abort_qd_sizes 00:28:20.047 ************************************ 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:20.047 * Looking for test storage... 00:28:20.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.047 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.047 --rc genhtml_branch_coverage=1 00:28:20.047 --rc genhtml_function_coverage=1 00:28:20.047 --rc genhtml_legend=1 00:28:20.047 --rc geninfo_all_blocks=1 00:28:20.048 --rc geninfo_unexecuted_blocks=1 00:28:20.048 00:28:20.048 ' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.048 --rc genhtml_branch_coverage=1 00:28:20.048 --rc genhtml_function_coverage=1 00:28:20.048 --rc genhtml_legend=1 00:28:20.048 --rc geninfo_all_blocks=1 00:28:20.048 --rc geninfo_unexecuted_blocks=1 00:28:20.048 00:28:20.048 ' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.048 --rc genhtml_branch_coverage=1 00:28:20.048 --rc genhtml_function_coverage=1 00:28:20.048 --rc genhtml_legend=1 00:28:20.048 --rc geninfo_all_blocks=1 00:28:20.048 --rc geninfo_unexecuted_blocks=1 00:28:20.048 00:28:20.048 ' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.048 --rc genhtml_branch_coverage=1 00:28:20.048 --rc genhtml_function_coverage=1 00:28:20.048 --rc genhtml_legend=1 00:28:20.048 --rc geninfo_all_blocks=1 00:28:20.048 --rc geninfo_unexecuted_blocks=1 00:28:20.048 00:28:20.048 ' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:20.048 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:20.307 Cannot find device "nvmf_init_br" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:20.307 Cannot find device "nvmf_init_br2" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:20.307 Cannot find device "nvmf_tgt_br" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:20.307 Cannot find device "nvmf_tgt_br2" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:20.307 Cannot find device "nvmf_init_br" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:20.307 Cannot find device "nvmf_init_br2" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:20.307 Cannot find device "nvmf_tgt_br" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:20.307 Cannot find device "nvmf_tgt_br2" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:20.307 Cannot find device "nvmf_br" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:20.307 Cannot find device "nvmf_init_if" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:20.307 Cannot find device "nvmf_init_if2" 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:20.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:20.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:28:20.307 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:20.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:20.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:28:20.567
00:28:20.567 --- 10.0.0.3 ping statistics ---
00:28:20.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.567 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:20.567 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:20.567 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms
00:28:20.567
00:28:20.567 --- 10.0.0.4 ping statistics ---
00:28:20.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.567 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:20.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:20.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:28:20.567
00:28:20.567 --- 10.0.0.1 ping statistics ---
00:28:20.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.567 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:20.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:20.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms
00:28:20.567
00:28:20.567 --- 10.0.0.2 ping statistics ---
00:28:20.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.567 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:28:20.567 11:48:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:21.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:21.503 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:28:21.503 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=111552
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 111552
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 111552 ']'
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:21.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:21.503 11:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:21.503 [2024-12-16 11:48:26.865111] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization...
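Up to this point nvmf/common.sh has built the virtual test network: veth pairs for the initiator (nvmf_init_if/br, nvmf_init_if2/br2) stay in the root namespace, the target pairs (nvmf_tgt_if/br, nvmf_tgt_if2/br2) have their *_if ends moved into the nvmf_tgt_ns_spdk namespace, all four *_br peer ends are enslaved to the nvmf_br bridge, tagged iptables rules open TCP port 4420, and the four pings confirm reachability in both directions. A minimal sketch of the same topology, reduced to one pair per side (names and addresses taken from the log above; assumes root and the iproute2 tools):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> root namespace

With that in place, nvmf_tgt is launched inside the namespace (the @508 entry above) and the root namespace plays the host side.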
00:28:21.503 [2024-12-16 11:48:26.865226] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:21.762 [2024-12-16 11:48:27.015203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:21.762 [2024-12-16 11:48:27.103205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:21.762 [2024-12-16 11:48:27.103296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:21.762 [2024-12-16 11:48:27.103333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:21.762 [2024-12-16 11:48:27.103350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:21.762 [2024-12-16 11:48:27.103363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:21.762 [2024-12-16 11:48:27.104954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:21.762 [2024-12-16 11:48:27.105106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:21.762 [2024-12-16 11:48:27.105282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:21.762 [2024-12-16 11:48:27.105289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf=
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"'
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 ))
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 ))
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:22.813 11:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:22.813 ************************************
00:28:22.813 START TEST spdk_target_abort
00:28:22.813 ************************************
00:28:22.813 11:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:28:22.814 11:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:28:22.814 11:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
00:28:22.814 11:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.814 11:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:22.814 spdk_targetn1
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:22.814 [2024-12-16 11:48:28.060894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:22.814 [2024-12-16 11:48:28.098244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3'
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:22.814 11:48:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:26.101 Initializing NVMe Controllers
00:28:26.101 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:26.101 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:26.101 Initialization complete. Launching workers.
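The rabort helper traced above assembles the SPDK transport ID string field by field and then runs the abort example once per queue depth. Reduced to the commands the trace actually shows, the loop is effectively (flag meanings are inferred from the example's usage; the invocation itself is verbatim from the log):

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
    # -q queue depth under test, -w rw mixed workload, -M 50 read percentage,
    # -o 4096-byte I/O, -r transport ID of the subsystem to attack with aborts
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done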
00:28:26.101 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9976, failed: 0
00:28:26.101 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1192, failed to submit 8784
00:28:26.101 success 742, unsuccessful 450, failed 0
00:28:26.101 11:48:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:26.101 11:48:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:29.388 Initializing NVMe Controllers
00:28:29.388 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:29.388 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:29.388 Initialization complete. Launching workers.
00:28:29.388 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0
00:28:29.388 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 4744
00:28:29.388 success 290, unsuccessful 940, failed 0
00:28:29.388 11:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:29.388 11:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:32.676 Initializing NVMe Controllers
00:28:32.676 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:32.676 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:32.676 Initialization complete. Launching workers.
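The counters in each run summary are self-consistent: success + unsuccessful = aborts submitted, and aborts submitted + failed to submit = I/O completed. For the qd=4 run above, 742 + 450 = 1192 and 1192 + 8784 = 9976; for qd=24, 290 + 940 = 1230 and 1230 + 4744 = 5974. A quick shell check of the same identity on the first summary:

  success=742 unsuccessful=450 submitted=1192 failed_to_submit=8784 completed=9976
  (( success + unsuccessful == submitted )) \
    && (( submitted + failed_to_submit == completed )) \
    && echo "counters consistent"   # prints for every run in this log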
00:28:32.676 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28346, failed: 0
00:28:32.676 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2605, failed to submit 25741
00:28:32.676 success 318, unsuccessful 2287, failed 0
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.676 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 111552
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 111552 ']'
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 111552
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111552
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:33.612 killing process with pid 111552
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111552'
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 111552
00:28:33.612 11:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 111552
00:28:33.871
00:28:33.871 real 0m11.249s
00:28:33.871 user 0m45.351s
00:28:33.871 sys 0m1.768s
00:28:33.871 11:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:33.871 11:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:33.871 ************************************
00:28:33.871 END TEST spdk_target_abort
00:28:33.871 ************************************
00:28:34.130 11:48:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:28:34.130 11:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:34.130 11:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:34.130 11:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:34.130 ************************************
00:28:34.130 START TEST kernel_target_abort
************************************
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:28:34.130 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:34.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:34.388 Waiting for block devices as requested
00:28:34.388 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:34.648 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:28:34.648 No valid GPT data, bailing
00:28:34.648 11:48:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:28:34.648 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:28:34.907 No valid GPT data, bailing
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:28:34.907 No valid GPT data, bailing
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:28:34.907 No valid GPT data, bailing
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 --hostid=503bc55d-5584-47ba-b171-ff181f376052 -a 10.0.0.1 -t tcp -s 4420
00:28:34.907
00:28:34.907 Discovery Log Number of Records 2, Generation counter 2
00:28:34.907 =====Discovery Log Entry 0======
00:28:34.907 trtype: tcp
00:28:34.907 adrfam: ipv4
00:28:34.907 subtype: current discovery subsystem
00:28:34.907 treq: not specified, sq flow control disable supported
00:28:34.907 portid: 1
00:28:34.907 trsvcid: 4420
00:28:34.907 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:34.907 traddr: 10.0.0.1
00:28:34.907 eflags: none
00:28:34.907 sectype: none
00:28:34.907 =====Discovery Log Entry 1======
00:28:34.907 trtype: tcp
00:28:34.907 adrfam: ipv4
00:28:34.907 subtype: nvme subsystem
00:28:34.907 treq: not specified, sq flow control disable supported
00:28:34.907 portid: 1
00:28:34.907 trsvcid: 4420
00:28:34.907 subnqn: nqn.2016-06.io.spdk:testnqn
00:28:34.907 traddr: 10.0.0.1
00:28:34.907 eflags: none
00:28:34.907 sectype: none
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:28:34.907 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:34.908 11:48:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:38.193 Initializing NVMe Controllers
00:28:38.193 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:38.193 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:38.193 Initialization complete. Launching workers.
00:28:38.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34935, failed: 0
00:28:38.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34935, failed to submit 0
00:28:38.193 success 0, unsuccessful 34935, failed 0
00:28:38.193 11:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:38.193 11:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:41.478 Initializing NVMe Controllers
00:28:41.478 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:41.478 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:41.478 Initialization complete. Launching workers.
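Before the kernel target claimed a disk above, the script scanned /sys/block/nvme* for a device that is not zoned and carries no partition table: spdk-gpt.py reporting "No valid GPT data, bailing" and blkid printing an empty PTTYPE together mean the device is free, so block_in_use returns 1 and the device becomes the candidate. Roughly, under those assumptions:

  for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # skip zoned devices (the @1652/@1653 checks in the trace)
    [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
    # empty PTTYPE output means no partition table: device is unused
    if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
      nvme=$dev
    fi
  done
  echo "exporting $nvme"   # here /dev/nvme1n1, the last free device found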
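The configure_kernel_target sequence traced earlier (the @686 through @705 entries) drives the Linux nvmet configfs tree with plain mkdir, echo, and ln -s. The xtrace does not show the redirection targets of the echo commands, so the attribute files in the sketch below are the usual nvmet names and should be read as assumptions rather than something visible in the log:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub"                                      # configfs creates namespaces/ automatically
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$sub/attr_serial"            # assumed target of 'echo SPDK-...'
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                  # expose the subsystem on the port

The nvme discover run against 10.0.0.1:4420 that follows in the trace, returning the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, confirms the kernel target came up before the abort workloads are pointed at it.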
00:28:41.478 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68798, failed: 0
00:28:41.478 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29701, failed to submit 39097
00:28:41.478 success 0, unsuccessful 29701, failed 0
00:28:41.478 11:48:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:41.478 11:48:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:44.789 Initializing NVMe Controllers
00:28:44.789 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:44.789 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:44.789 Initialization complete. Launching workers.
00:28:44.789 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77882, failed: 0
00:28:44.789 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19472, failed to submit 58410
00:28:44.789 success 0, unsuccessful 19472, failed 0
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:28:44.789 11:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:45.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:47.259 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:28:47.259 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:28:47.259
00:28:47.259 real 0m13.213s
00:28:47.259 user 0m6.343s
00:28:47.259 sys 0m4.292s
00:28:47.259 11:48:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:47.259 11:48:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:47.259 ************************************
00:28:47.259 END TEST kernel_target_abort
00:28:47.259 ************************************
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:47.259 rmmod nvme_tcp
00:28:47.259 rmmod nvme_fabrics
00:28:47.259 rmmod nvme_keyring
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 111552 ']'
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 111552
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 111552 ']'
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 111552
00:28:47.259 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (111552) - No such process
00:28:47.259 Process with pid 111552 is not found
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 111552 is not found'
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:28:47.259 11:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:47.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:47.826 Waiting for block devices as requested
00:28:47.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:47.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:47.826 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0
00:28:48.085
00:28:48.085 real 0m28.232s
00:28:48.085 user 0m53.073s
00:28:48.085 sys 0m7.549s
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:48.085 11:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:48.085 ************************************
00:28:48.085 END TEST nvmf_abort_qd_sizes
00:28:48.085 ************************************
00:28:48.345 11:48:53 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh
00:28:48.345 11:48:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:48.345 11:48:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:48.345 11:48:53 -- common/autotest_common.sh@10 -- # set +x
00:28:48.345 ************************************
00:28:48.345 START TEST keyring_file
00:28:48.345 ************************************
00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh
00:28:48.345 * Looking for test storage...
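Teardown above mirrors setup, and the iptables cleanup shows a tidy idiom: because every test rule was installed through ipts with an '-m comment --comment SPDK_NVMF:<rule>' tag (the @790 entries earlier), the iptr helper can drop exactly those rules and nothing else by filtering the saved ruleset. The three @791 entries are the components of this pipeline:

  # remove only the rules the test added, leaving the host's firewall untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore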
00:28:48.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.345 11:48:53 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.345 --rc genhtml_branch_coverage=1 00:28:48.345 --rc genhtml_function_coverage=1 00:28:48.345 --rc genhtml_legend=1 00:28:48.345 --rc geninfo_all_blocks=1 00:28:48.345 --rc geninfo_unexecuted_blocks=1 00:28:48.345 00:28:48.345 ' 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.345 --rc genhtml_branch_coverage=1 00:28:48.345 --rc genhtml_function_coverage=1 00:28:48.345 --rc genhtml_legend=1 00:28:48.345 --rc geninfo_all_blocks=1 00:28:48.345 --rc 
geninfo_unexecuted_blocks=1 00:28:48.345 00:28:48.345 ' 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.345 --rc genhtml_branch_coverage=1 00:28:48.345 --rc genhtml_function_coverage=1 00:28:48.345 --rc genhtml_legend=1 00:28:48.345 --rc geninfo_all_blocks=1 00:28:48.345 --rc geninfo_unexecuted_blocks=1 00:28:48.345 00:28:48.345 ' 00:28:48.345 11:48:53 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.345 --rc genhtml_branch_coverage=1 00:28:48.345 --rc genhtml_function_coverage=1 00:28:48.345 --rc genhtml_legend=1 00:28:48.345 --rc geninfo_all_blocks=1 00:28:48.345 --rc geninfo_unexecuted_blocks=1 00:28:48.345 00:28:48.346 ' 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:48.346 11:48:53 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.346 11:48:53 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.346 11:48:53 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.346 11:48:53 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.346 11:48:53 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.346 11:48:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.346 11:48:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.346 11:48:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:48.346 11:48:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.346 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:48.346 11:48:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:48.346 11:48:53 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DcAH0dy9HY 00:28:48.346 11:48:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:48.346 11:48:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DcAH0dy9HY 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DcAH0dy9HY 00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DcAH0dy9HY 00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uNk4bbTp0H 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:48.605 11:48:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uNk4bbTp0H 00:28:48.605 11:48:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uNk4bbTp0H 00:28:48.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
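
Both key files above come out of the same prep_key helper: format_interchange_psk writes the PSK in interchange form into a mktemp file, which is then locked down to 0600. Below is a minimal out-of-tree sketch of those steps, assuming the traced format_key builds the TP 8006 interchange string (prefix, two-hex-digit hash id, base64 of the key bytes plus their CRC32); only the NVMeTLSkey-1 prefix, the key material, and digest 0 are visible in the trace, so the Python body here is an assumption:

prep_key_sketch() {   # usage: prep_key_sketch 00112233445566778899aabbccddeeff 0
  local key=$1 digest=$2 path
  path=$(mktemp)
  python3 - "$key" "$digest" > "$path" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# interchange string: <prefix>:<hash id>:<base64(key + crc32(key))>:
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
  chmod 0600 "$path"   # keyring_file later refuses key files with wider permissions
  echo "$path"
}
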
00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uNk4bbTp0H 00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=112474 00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:48.605 11:48:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 112474 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112474 ']' 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.605 11:48:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:48.605 [2024-12-16 11:48:53.936715] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:28:48.605 [2024-12-16 11:48:53.937067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112474 ] 00:28:48.863 [2024-12-16 11:48:54.080992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.863 [2024-12-16 11:48:54.154225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.122 11:48:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.122 11:48:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:49.122 11:48:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:49.122 11:48:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.122 11:48:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:49.381 [2024-12-16 11:48:54.539469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.381 null0 00:28:49.381 [2024-12-16 11:48:54.571437] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:49.381 [2024-12-16 11:48:54.571694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.381 11:48:54 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:49.381 [2024-12-16 11:48:54.599458] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:49.381 2024/12/16 11:48:54 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:28:49.381 request: 00:28:49.381 { 00:28:49.381 "method": "nvmf_subsystem_add_listener", 00:28:49.381 "params": { 00:28:49.381 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.381 "secure_channel": false, 00:28:49.381 "listen_address": { 00:28:49.381 "trtype": "tcp", 00:28:49.381 "traddr": "127.0.0.1", 00:28:49.381 "trsvcid": "4420" 00:28:49.381 } 00:28:49.381 } 00:28:49.381 } 00:28:49.381 Got JSON-RPC error response 00:28:49.381 GoRPCClient: error on JSON-RPC call 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.381 11:48:54 keyring_file -- keyring/file.sh@47 -- # bperfpid=112497 00:28:49.381 11:48:54 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:49.381 11:48:54 keyring_file -- keyring/file.sh@49 -- # waitforlisten 112497 /var/tmp/bperf.sock 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112497 ']' 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.381 11:48:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:49.382 [2024-12-16 11:48:54.673731] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
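
The failing nvmf_subsystem_add_listener call a few lines up is wrapped in autotest's NOT helper, which runs a command that is expected to fail and turns its non-zero exit status into success. A simplified sketch of the pattern spelled out by the es=... trace lines (the real helper also screens its first argument through type -t, as the trace shows):

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"   # killed by a signal: a genuine failure, propagate it
  (( es != 0 ))                    # succeed only when the wrapped command failed
}

Here the second add of 127.0.0.1:4420 on nqn.2016-06.io.spdk:cnode0 returns "Listener already exists", so es becomes 1 and the NOT assertion passes.
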
00:28:49.382 [2024-12-16 11:48:54.673975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112497 ] 00:28:49.640 [2024-12-16 11:48:54.825620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.640 [2024-12-16 11:48:54.889307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.574 11:48:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.574 11:48:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:50.574 11:48:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:50.574 11:48:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:50.832 11:48:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNk4bbTp0H 00:28:50.832 11:48:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNk4bbTp0H 00:28:51.090 11:48:56 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:51.090 11:48:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:51.090 11:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.090 11:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.090 11:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:51.348 11:48:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DcAH0dy9HY == \/\t\m\p\/\t\m\p\.\D\c\A\H\0\d\y\9\H\Y ]] 00:28:51.348 11:48:56 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:51.348 11:48:56 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:51.348 11:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.348 11:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:51.348 11:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.608 11:48:56 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uNk4bbTp0H == \/\t\m\p\/\t\m\p\.\u\N\k\4\b\b\T\p\0\H ]] 00:28:51.608 11:48:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:51.608 11:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:51.608 11:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.608 11:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:51.608 11:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.608 11:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.869 11:48:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:51.869 11:48:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:51.869 11:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:51.869 11:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.869 11:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:51.869 11:48:57 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:28:51.869 11:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.127 11:48:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:52.127 11:48:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.127 11:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.695 [2024-12-16 11:48:57.806433] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:52.695 nvme0n1 00:28:52.695 11:48:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:52.695 11:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:52.695 11:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.695 11:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.695 11:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.695 11:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:52.953 11:48:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:52.953 11:48:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:52.953 11:48:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.953 11:48:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:52.953 11:48:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.953 11:48:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.953 11:48:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:53.211 11:48:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:53.211 11:48:58 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.211 Running I/O for 1 seconds... 
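
While the one-second bdevperf run above completes, note how the refcount assertions bracketing the attach work. They reduce to two helpers, reconstructed here from the keyring/common.sh trace (the rpc.py path and the /var/tmp/bperf.sock socket are this run's, not a public API):

bperf_cmd()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

Attaching a controller with --psk key0 pins the key, which is why the check after bdev_nvme_attach_controller expects (( $(get_refcnt key0) == 2 )) while key1 stays at 1.
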
00:28:54.405 11597.00 IOPS, 45.30 MiB/s
00:28:54.405 Latency(us)
00:28:54.405 [2024-12-16T11:48:59.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.405 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:28:54.405 nvme0n1 : 1.01 11664.71 45.57 0.00 0.00 10948.25 5123.72 22282.24
00:28:54.405 [2024-12-16T11:48:59.819Z] ===================================================================================================================
00:28:54.405 [2024-12-16T11:48:59.819Z] Total : 11664.71 45.57 0.00 0.00 10948.25 5123.72 22282.24
00:28:54.405 {
00:28:54.405 "results": [
00:28:54.405 {
00:28:54.405 "job": "nvme0n1",
00:28:54.405 "core_mask": "0x2",
00:28:54.405 "workload": "randrw",
00:28:54.405 "percentage": 50,
00:28:54.405 "status": "finished",
00:28:54.405 "queue_depth": 128,
00:28:54.405 "io_size": 4096,
00:28:54.405 "runtime": 1.005169,
00:28:54.405 "iops": 11664.705139135807,
00:28:54.405 "mibps": 45.56525444974925,
00:28:54.405 "io_failed": 0,
00:28:54.405 "io_timeout": 0,
00:28:54.405 "avg_latency_us": 10948.253103314595,
00:28:54.405 "min_latency_us": 5123.723636363637,
00:28:54.405 "max_latency_us": 22282.24
00:28:54.405 }
00:28:54.405 ],
00:28:54.405 "core_count": 1
00:28:54.405 }
00:28:54.405 11:48:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:28:54.405 11:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:28:54.664 11:48:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:28:54.664 11:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:54.664 11:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:28:54.664 11:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:54.664 11:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:54.664 11:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:54.922 11:49:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:28:54.922 11:49:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:28:54.922 11:49:00 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:28:54.922 11:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:54.922 11:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:54.922 11:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:28:54.922 11:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:55.181 11:49:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:28:55.181 11:49:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@644
-- # case "$(type -t "$arg")" in 00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.181 11:49:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.181 11:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.440 [2024-12-16 11:49:00.787472] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:55.440 [2024-12-16 11:49:00.788064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10811f0 (107): Transport endpoint is not connected 00:28:55.440 [2024-12-16 11:49:00.789060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10811f0 (9): Bad file descriptor 00:28:55.440 [2024-12-16 11:49:00.790056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:55.440 [2024-12-16 11:49:00.790086] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:55.440 [2024-12-16 11:49:00.790098] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:55.440 [2024-12-16 11:49:00.790109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:55.440 2024/12/16 11:49:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:55.440 request: 00:28:55.440 { 00:28:55.440 "method": "bdev_nvme_attach_controller", 00:28:55.440 "params": { 00:28:55.440 "name": "nvme0", 00:28:55.440 "trtype": "tcp", 00:28:55.440 "traddr": "127.0.0.1", 00:28:55.440 "adrfam": "ipv4", 00:28:55.440 "trsvcid": "4420", 00:28:55.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.440 "prchk_reftag": false, 00:28:55.440 "prchk_guard": false, 00:28:55.440 "hdgst": false, 00:28:55.440 "ddgst": false, 00:28:55.440 "psk": "key1", 00:28:55.440 "allow_unrecognized_csi": false 00:28:55.440 } 00:28:55.440 } 00:28:55.440 Got JSON-RPC error response 00:28:55.440 GoRPCClient: error on JSON-RPC call 00:28:55.440 11:49:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:55.440 11:49:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.440 11:49:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.440 11:49:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.440 11:49:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:55.440 11:49:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:55.440 11:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.440 11:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.440 11:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.440 11:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.007 11:49:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:56.007 11:49:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:56.007 11:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.007 11:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:56.007 11:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.007 11:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.007 11:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:56.265 11:49:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:56.265 11:49:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:56.265 11:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:56.523 11:49:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:56.523 11:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:56.781 11:49:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:56.781 11:49:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:28:56.781 11:49:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:57.038 11:49:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:57.038 11:49:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.DcAH0dy9HY 00:28:57.038 11:49:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.038 11:49:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.038 11:49:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.296 [2024-12-16 11:49:02.608360] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DcAH0dy9HY': 0100660 00:28:57.296 [2024-12-16 11:49:02.608405] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:57.296 2024/12/16 11:49:02 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.DcAH0dy9HY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:57.296 request: 00:28:57.296 { 00:28:57.296 "method": "keyring_file_add_key", 00:28:57.296 "params": { 00:28:57.296 "name": "key0", 00:28:57.296 "path": "/tmp/tmp.DcAH0dy9HY" 00:28:57.296 } 00:28:57.296 } 00:28:57.296 Got JSON-RPC error response 00:28:57.296 GoRPCClient: error on JSON-RPC call 00:28:57.296 11:49:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:57.296 11:49:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.296 11:49:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.296 11:49:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.296 11:49:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.DcAH0dy9HY 00:28:57.296 11:49:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.296 11:49:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DcAH0dy9HY 00:28:57.554 11:49:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.DcAH0dy9HY 00:28:57.554 11:49:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:57.554 11:49:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:57.554 11:49:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:57.554 11:49:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:57.554 11:49:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.554 11:49:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.119 11:49:03 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:58.119 11:49:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.119 11:49:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.119 11:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.119 [2024-12-16 11:49:03.508613] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DcAH0dy9HY': No such file or directory 00:28:58.119 [2024-12-16 11:49:03.508657] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:58.119 [2024-12-16 11:49:03.508679] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:58.119 [2024-12-16 11:49:03.508694] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:58.119 [2024-12-16 11:49:03.508705] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:58.119 [2024-12-16 11:49:03.508714] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:58.119 2024/12/16 11:49:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:28:58.119 request: 00:28:58.119 { 00:28:58.119 "method": "bdev_nvme_attach_controller", 00:28:58.119 "params": { 00:28:58.119 "name": "nvme0", 00:28:58.119 "trtype": "tcp", 00:28:58.119 "traddr": "127.0.0.1", 00:28:58.119 "adrfam": "ipv4", 00:28:58.119 "trsvcid": "4420", 00:28:58.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:58.119 "prchk_reftag": false, 00:28:58.119 "prchk_guard": false, 00:28:58.119 "hdgst": false, 00:28:58.119 "ddgst": false, 00:28:58.119 "psk": "key0", 00:28:58.119 "allow_unrecognized_csi": false 00:28:58.119 } 00:28:58.119 } 00:28:58.119 Got JSON-RPC error response 00:28:58.119 
GoRPCClient: error on JSON-RPC call 00:28:58.377 11:49:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:58.377 11:49:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:58.377 11:49:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:58.377 11:49:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:58.377 11:49:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:58.377 11:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:58.635 11:49:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S0tQjbyYQJ 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:58.635 11:49:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S0tQjbyYQJ 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S0tQjbyYQJ 00:28:58.635 11:49:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.S0tQjbyYQJ 00:28:58.635 11:49:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0tQjbyYQJ 00:28:58.635 11:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0tQjbyYQJ 00:28:58.897 11:49:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.897 11:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.160 nvme0n1 00:28:59.160 11:49:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:59.160 11:49:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.160 11:49:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.160 11:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.160 11:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.160 11:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
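
The two failures above pin down keyring_file's file checks: a key file with group or other permission bits set is rejected outright (note the 0100660 mode in the error), and a key whose backing file has been deleted after registration fails at attach time with No such file or directory. In outline, reusing the NOT and bperf_cmd helpers sketched earlier ($key0path stands for this run's temp file):

chmod 0660 "$key0path"
NOT bperf_cmd keyring_file_add_key key0 "$key0path"   # group-readable: Operation not permitted
chmod 0600 "$key0path"
bperf_cmd keyring_file_add_key key0 "$key0path"       # owner-only: accepted
rm -f "$key0path"                                     # the key stays registered in the keyring...
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # ...but using it now fails
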
00:28:59.725 11:49:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:59.725 11:49:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:59.725 11:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:59.983 11:49:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:59.983 11:49:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:59.983 11:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.983 11:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.983 11:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.240 11:49:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:00.240 11:49:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:00.240 11:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:00.240 11:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.240 11:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.240 11:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.240 11:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.498 11:49:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:00.498 11:49:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:00.498 11:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:01.064 11:49:06 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:01.064 11:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.064 11:49:06 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:01.322 11:49:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:01.322 11:49:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0tQjbyYQJ 00:29:01.322 11:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0tQjbyYQJ 00:29:01.580 11:49:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNk4bbTp0H 00:29:01.580 11:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNk4bbTp0H 00:29:01.838 11:49:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.838 11:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:02.096 nvme0n1 00:29:02.096 11:49:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:02.096 11:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
00:29:02.353 11:49:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:02.353 "subsystems": [ 00:29:02.353 { 00:29:02.353 "subsystem": "keyring", 00:29:02.353 "config": [ 00:29:02.353 { 00:29:02.353 "method": "keyring_file_add_key", 00:29:02.353 "params": { 00:29:02.353 "name": "key0", 00:29:02.353 "path": "/tmp/tmp.S0tQjbyYQJ" 00:29:02.353 } 00:29:02.353 }, 00:29:02.353 { 00:29:02.353 "method": "keyring_file_add_key", 00:29:02.353 "params": { 00:29:02.353 "name": "key1", 00:29:02.353 "path": "/tmp/tmp.uNk4bbTp0H" 00:29:02.353 } 00:29:02.353 } 00:29:02.353 ] 00:29:02.353 }, 00:29:02.353 { 00:29:02.353 "subsystem": "iobuf", 00:29:02.353 "config": [ 00:29:02.353 { 00:29:02.353 "method": "iobuf_set_options", 00:29:02.353 "params": { 00:29:02.353 "enable_numa": false, 00:29:02.353 "large_bufsize": 135168, 00:29:02.353 "large_pool_count": 1024, 00:29:02.353 "small_bufsize": 8192, 00:29:02.353 "small_pool_count": 8192 00:29:02.353 } 00:29:02.353 } 00:29:02.353 ] 00:29:02.353 }, 00:29:02.353 { 00:29:02.353 "subsystem": "sock", 00:29:02.353 "config": [ 00:29:02.353 { 00:29:02.353 "method": "sock_set_default_impl", 00:29:02.353 "params": { 00:29:02.353 "impl_name": "posix" 00:29:02.353 } 00:29:02.353 }, 00:29:02.353 { 00:29:02.353 "method": "sock_impl_set_options", 00:29:02.353 "params": { 00:29:02.353 "enable_ktls": false, 00:29:02.353 "enable_placement_id": 0, 00:29:02.353 "enable_quickack": false, 00:29:02.353 "enable_recv_pipe": true, 00:29:02.353 "enable_zerocopy_send_client": false, 00:29:02.353 "enable_zerocopy_send_server": true, 00:29:02.353 "impl_name": "ssl", 00:29:02.353 "recv_buf_size": 4096, 00:29:02.353 "send_buf_size": 4096, 00:29:02.353 "tls_version": 0, 00:29:02.354 "zerocopy_threshold": 0 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "sock_impl_set_options", 00:29:02.354 "params": { 00:29:02.354 "enable_ktls": false, 00:29:02.354 "enable_placement_id": 0, 00:29:02.354 "enable_quickack": false, 00:29:02.354 "enable_recv_pipe": true, 00:29:02.354 "enable_zerocopy_send_client": false, 00:29:02.354 "enable_zerocopy_send_server": true, 00:29:02.354 "impl_name": "posix", 00:29:02.354 "recv_buf_size": 2097152, 00:29:02.354 "send_buf_size": 2097152, 00:29:02.354 "tls_version": 0, 00:29:02.354 "zerocopy_threshold": 0 00:29:02.354 } 00:29:02.354 } 00:29:02.354 ] 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "subsystem": "vmd", 00:29:02.354 "config": [] 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "subsystem": "accel", 00:29:02.354 "config": [ 00:29:02.354 { 00:29:02.354 "method": "accel_set_options", 00:29:02.354 "params": { 00:29:02.354 "buf_count": 2048, 00:29:02.354 "large_cache_size": 16, 00:29:02.354 "sequence_count": 2048, 00:29:02.354 "small_cache_size": 128, 00:29:02.354 "task_count": 2048 00:29:02.354 } 00:29:02.354 } 00:29:02.354 ] 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "subsystem": "bdev", 00:29:02.354 "config": [ 00:29:02.354 { 00:29:02.354 "method": "bdev_set_options", 00:29:02.354 "params": { 00:29:02.354 "bdev_auto_examine": true, 00:29:02.354 "bdev_io_cache_size": 256, 00:29:02.354 "bdev_io_pool_size": 65535, 00:29:02.354 "iobuf_large_cache_size": 16, 00:29:02.354 "iobuf_small_cache_size": 128 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_raid_set_options", 00:29:02.354 "params": { 00:29:02.354 "process_max_bandwidth_mb_sec": 0, 00:29:02.354 "process_window_size_kb": 1024 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_iscsi_set_options", 00:29:02.354 "params": { 00:29:02.354 
"timeout_sec": 30 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_nvme_set_options", 00:29:02.354 "params": { 00:29:02.354 "action_on_timeout": "none", 00:29:02.354 "allow_accel_sequence": false, 00:29:02.354 "arbitration_burst": 0, 00:29:02.354 "bdev_retry_count": 3, 00:29:02.354 "ctrlr_loss_timeout_sec": 0, 00:29:02.354 "delay_cmd_submit": true, 00:29:02.354 "dhchap_dhgroups": [ 00:29:02.354 "null", 00:29:02.354 "ffdhe2048", 00:29:02.354 "ffdhe3072", 00:29:02.354 "ffdhe4096", 00:29:02.354 "ffdhe6144", 00:29:02.354 "ffdhe8192" 00:29:02.354 ], 00:29:02.354 "dhchap_digests": [ 00:29:02.354 "sha256", 00:29:02.354 "sha384", 00:29:02.354 "sha512" 00:29:02.354 ], 00:29:02.354 "disable_auto_failback": false, 00:29:02.354 "fast_io_fail_timeout_sec": 0, 00:29:02.354 "generate_uuids": false, 00:29:02.354 "high_priority_weight": 0, 00:29:02.354 "io_path_stat": false, 00:29:02.354 "io_queue_requests": 512, 00:29:02.354 "keep_alive_timeout_ms": 10000, 00:29:02.354 "low_priority_weight": 0, 00:29:02.354 "medium_priority_weight": 0, 00:29:02.354 "nvme_adminq_poll_period_us": 10000, 00:29:02.354 "nvme_error_stat": false, 00:29:02.354 "nvme_ioq_poll_period_us": 0, 00:29:02.354 "rdma_cm_event_timeout_ms": 0, 00:29:02.354 "rdma_max_cq_size": 0, 00:29:02.354 "rdma_srq_size": 0, 00:29:02.354 "rdma_umr_per_io": false, 00:29:02.354 "reconnect_delay_sec": 0, 00:29:02.354 "timeout_admin_us": 0, 00:29:02.354 "timeout_us": 0, 00:29:02.354 "transport_ack_timeout": 0, 00:29:02.354 "transport_retry_count": 4, 00:29:02.354 "transport_tos": 0 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_nvme_attach_controller", 00:29:02.354 "params": { 00:29:02.354 "adrfam": "IPv4", 00:29:02.354 "ctrlr_loss_timeout_sec": 0, 00:29:02.354 "ddgst": false, 00:29:02.354 "fast_io_fail_timeout_sec": 0, 00:29:02.354 "hdgst": false, 00:29:02.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.354 "multipath": "multipath", 00:29:02.354 "name": "nvme0", 00:29:02.354 "prchk_guard": false, 00:29:02.354 "prchk_reftag": false, 00:29:02.354 "psk": "key0", 00:29:02.354 "reconnect_delay_sec": 0, 00:29:02.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.354 "traddr": "127.0.0.1", 00:29:02.354 "trsvcid": "4420", 00:29:02.354 "trtype": "TCP" 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_nvme_set_hotplug", 00:29:02.354 "params": { 00:29:02.354 "enable": false, 00:29:02.354 "period_us": 100000 00:29:02.354 } 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "method": "bdev_wait_for_examine" 00:29:02.354 } 00:29:02.354 ] 00:29:02.354 }, 00:29:02.354 { 00:29:02.354 "subsystem": "nbd", 00:29:02.354 "config": [] 00:29:02.354 } 00:29:02.354 ] 00:29:02.354 }' 00:29:02.354 11:49:07 keyring_file -- keyring/file.sh@115 -- # killprocess 112497 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112497 ']' 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112497 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112497 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.354 killing process with pid 112497 00:29:02.354 11:49:07 keyring_file -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 112497' 00:29:02.354 Received shutdown signal, test time was about 1.000000 seconds 00:29:02.354 00:29:02.354 Latency(us) 00:29:02.354 [2024-12-16T11:49:07.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.354 [2024-12-16T11:49:07.768Z] =================================================================================================================== 00:29:02.354 [2024-12-16T11:49:07.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@973 -- # kill 112497 00:29:02.354 11:49:07 keyring_file -- common/autotest_common.sh@978 -- # wait 112497 00:29:02.613 11:49:07 keyring_file -- keyring/file.sh@118 -- # bperfpid=112982 00:29:02.613 11:49:07 keyring_file -- keyring/file.sh@120 -- # waitforlisten 112982 /var/tmp/bperf.sock 00:29:02.613 11:49:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112982 ']' 00:29:02.613 11:49:07 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:02.613 11:49:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.613 11:49:07 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:02.613 "subsystems": [ 00:29:02.613 { 00:29:02.613 "subsystem": "keyring", 00:29:02.613 "config": [ 00:29:02.613 { 00:29:02.613 "method": "keyring_file_add_key", 00:29:02.613 "params": { 00:29:02.613 "name": "key0", 00:29:02.613 "path": "/tmp/tmp.S0tQjbyYQJ" 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "keyring_file_add_key", 00:29:02.613 "params": { 00:29:02.613 "name": "key1", 00:29:02.613 "path": "/tmp/tmp.uNk4bbTp0H" 00:29:02.613 } 00:29:02.613 } 00:29:02.613 ] 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "subsystem": "iobuf", 00:29:02.613 "config": [ 00:29:02.613 { 00:29:02.613 "method": "iobuf_set_options", 00:29:02.613 "params": { 00:29:02.613 "enable_numa": false, 00:29:02.613 "large_bufsize": 135168, 00:29:02.613 "large_pool_count": 1024, 00:29:02.613 "small_bufsize": 8192, 00:29:02.613 "small_pool_count": 8192 00:29:02.613 } 00:29:02.613 } 00:29:02.613 ] 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "subsystem": "sock", 00:29:02.613 "config": [ 00:29:02.613 { 00:29:02.613 "method": "sock_set_default_impl", 00:29:02.613 "params": { 00:29:02.613 "impl_name": "posix" 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "sock_impl_set_options", 00:29:02.613 "params": { 00:29:02.613 "enable_ktls": false, 00:29:02.613 "enable_placement_id": 0, 00:29:02.613 "enable_quickack": false, 00:29:02.613 "enable_recv_pipe": true, 00:29:02.613 "enable_zerocopy_send_client": false, 00:29:02.613 "enable_zerocopy_send_server": true, 00:29:02.613 "impl_name": "ssl", 00:29:02.613 "recv_buf_size": 4096, 00:29:02.613 "send_buf_size": 4096, 00:29:02.613 "tls_version": 0, 00:29:02.613 "zerocopy_threshold": 0 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "sock_impl_set_options", 00:29:02.613 "params": { 00:29:02.613 "enable_ktls": false, 00:29:02.613 "enable_placement_id": 0, 00:29:02.613 "enable_quickack": false, 00:29:02.613 "enable_recv_pipe": true, 00:29:02.613 "enable_zerocopy_send_client": false, 00:29:02.613 "enable_zerocopy_send_server": true, 00:29:02.613 "impl_name": "posix", 00:29:02.613 "recv_buf_size": 2097152, 00:29:02.613 "send_buf_size": 2097152, 00:29:02.613 "tls_version": 0, 00:29:02.613 
"zerocopy_threshold": 0 00:29:02.613 } 00:29:02.613 } 00:29:02.613 ] 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "subsystem": "vmd", 00:29:02.613 "config": [] 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "subsystem": "accel", 00:29:02.613 "config": [ 00:29:02.613 { 00:29:02.613 "method": "accel_set_options", 00:29:02.613 "params": { 00:29:02.613 "buf_count": 2048, 00:29:02.613 "large_cache_size": 16, 00:29:02.613 "sequence_count": 2048, 00:29:02.613 "small_cache_size": 128, 00:29:02.613 "task_count": 2048 00:29:02.613 } 00:29:02.613 } 00:29:02.613 ] 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "subsystem": "bdev", 00:29:02.613 "config": [ 00:29:02.613 { 00:29:02.613 "method": "bdev_set_options", 00:29:02.613 "params": { 00:29:02.613 "bdev_auto_examine": true, 00:29:02.613 "bdev_io_cache_size": 256, 00:29:02.613 "bdev_io_pool_size": 65535, 00:29:02.613 "iobuf_large_cache_size": 16, 00:29:02.613 "iobuf_small_cache_size": 128 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "bdev_raid_set_options", 00:29:02.613 "params": { 00:29:02.613 "process_max_bandwidth_mb_sec": 0, 00:29:02.613 "process_window_size_kb": 1024 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "bdev_iscsi_set_options", 00:29:02.613 "params": { 00:29:02.613 "timeout_sec": 30 00:29:02.613 } 00:29:02.613 }, 00:29:02.613 { 00:29:02.613 "method": "bdev_nvme_set_options", 00:29:02.613 "params": { 00:29:02.613 "action_on_timeout": "none", 00:29:02.613 "allow_accel_sequence": false, 00:29:02.613 "arbitration_burst": 0, 00:29:02.613 "bdev_retry_count": 3, 00:29:02.613 "ctrlr_loss_timeout_sec": 0, 00:29:02.613 "delay_cmd_submit": true, 00:29:02.613 "dhchap_dhgroups": [ 00:29:02.613 "null", 00:29:02.613 "ffdhe2048", 00:29:02.613 "ffdhe3072", 00:29:02.613 "ffdhe4096", 00:29:02.613 "ffdhe6144", 00:29:02.613 "ffdhe8192" 00:29:02.613 ], 00:29:02.613 "dhchap_digests": [ 00:29:02.613 "sha256", 00:29:02.613 "sha384", 00:29:02.613 "sha512" 00:29:02.613 ], 00:29:02.613 "disable_auto_failback": false, 00:29:02.613 "fast_io_fail_timeout_sec": 0, 00:29:02.613 "generate_uuids": false, 00:29:02.613 "high_priority_weight": 0, 00:29:02.613 "io_path_stat": false, 00:29:02.613 "io_queue_requests": 512, 00:29:02.613 "keep_alive_timeout_ms": 10000, 00:29:02.613 "low_priority_weight": 0, 00:29:02.613 "medium_priority_weight": 0, 00:29:02.613 "nvme_adminq_poll_period_us": 10000, 00:29:02.613 "nvme_error_stat": false, 00:29:02.613 "nvme_ioq_poll_period_us": 0, 00:29:02.613 "rdma_cm_event_timeout_ms": 0, 00:29:02.613 "rdma_max_cq_size": 0, 00:29:02.614 "rdma_srq_size": 0, 00:29:02.614 "rdma_umr_per_io": false, 00:29:02.614 "reconnect_delay_sec": 0, 00:29:02.614 "timeout_admin_us": 0, 00:29:02.614 "timeout_us": 0, 00:29:02.614 "transport_ack_timeout": 0, 00:29:02.614 "transport_retry_count": 4, 00:29:02.614 "transport_tos": 0 00:29:02.614 } 00:29:02.614 }, 00:29:02.614 { 00:29:02.614 "method": "bdev_nvme_attach_controller", 00:29:02.614 "params": { 00:29:02.614 "adrfam": "IPv4", 00:29:02.614 "ctrlr_loss_timeout_sec": 0, 00:29:02.614 "ddgst": false, 00:29:02.614 "fast_io_fail_timeout_sec": 0, 00:29:02.614 "hdgst": false, 00:29:02.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.614 "multipath": "multipath", 00:29:02.614 "name": "nvme0", 00:29:02.614 "prchk_guard": false, 00:29:02.614 "prchk_reftag": false, 00:29:02.614 "psk": "key0", 00:29:02.614 "reconnect_delay_sec": 0, 00:29:02.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.614 "traddr": "127.0.0.1", 00:29:02.614 "trsvcid": "4420", 00:29:02.614 "trtype": "TCP" 
00:29:02.614 } 00:29:02.614 }, 00:29:02.614 { 00:29:02.614 "method": "bdev_nvme_set_hotplug", 00:29:02.614 "params": { 00:29:02.614 "enable": false, 00:29:02.614 "period_us": 100000 00:29:02.614 } 00:29:02.614 }, 00:29:02.614 { 00:29:02.614 "method": "bdev_wait_for_examine" 00:29:02.614 } 00:29:02.614 ] 00:29:02.614 }, 00:29:02.614 { 00:29:02.614 "subsystem": "nbd", 00:29:02.614 "config": [] 00:29:02.614 } 00:29:02.614 ] 00:29:02.614 }' 00:29:02.614 11:49:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.614 11:49:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.614 11:49:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.614 11:49:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:02.614 [2024-12-16 11:49:08.000589] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:29:02.614 [2024-12-16 11:49:08.001535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112982 ] 00:29:02.872 [2024-12-16 11:49:08.147398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.872 [2024-12-16 11:49:08.203248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.130 [2024-12-16 11:49:08.391257] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:03.696 11:49:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.696 11:49:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:03.696 11:49:08 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:03.696 11:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.696 11:49:08 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:03.954 11:49:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:03.954 11:49:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:03.954 11:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.954 11:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.954 11:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.954 11:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.954 11:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.213 11:49:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:04.213 11:49:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:04.213 11:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:04.213 11:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.213 11:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.213 11:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.213 11:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.471 11:49:09 
keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:04.471 11:49:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:04.471 11:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:04.471 11:49:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:05.037 11:49:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:05.037 11:49:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:05.037 11:49:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S0tQjbyYQJ /tmp/tmp.uNk4bbTp0H 00:29:05.037 11:49:10 keyring_file -- keyring/file.sh@20 -- # killprocess 112982 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112982 ']' 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112982 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112982 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112982' 00:29:05.037 killing process with pid 112982 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@973 -- # kill 112982 00:29:05.037 Received shutdown signal, test time was about 1.000000 seconds 00:29:05.037 00:29:05.037 Latency(us) 00:29:05.037 [2024-12-16T11:49:10.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.037 [2024-12-16T11:49:10.451Z] =================================================================================================================== 00:29:05.037 [2024-12-16T11:49:10.451Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:05.037 11:49:10 keyring_file -- common/autotest_common.sh@978 -- # wait 112982 00:29:05.296 11:49:10 keyring_file -- keyring/file.sh@21 -- # killprocess 112474 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112474 ']' 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112474 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112474 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.296 killing process with pid 112474 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112474' 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@973 -- # kill 112474 00:29:05.296 11:49:10 keyring_file -- common/autotest_common.sh@978 -- # wait 112474 00:29:05.554 00:29:05.554 real 0m17.382s 00:29:05.554 user 0m43.814s 00:29:05.554 sys 0m3.637s 00:29:05.554 11:49:10 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.554 11:49:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:05.554 
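Both process shutdowns above go through the killprocess helper from common/autotest_common.sh: confirm the pid is still alive, look up its command name, refuse to signal a sudo wrapper, then kill and wait so the exit status gets reaped. A simplified sketch of that flow, inferred from the traced steps (the real helper handles more corner cases):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid given
        kill -0 "$pid" || return 0           # already gone, nothing to do
        local process_name=
        if [ "$(uname)" = Linux ]; then
            # select by pid and print only the command name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it; propagates the exit code
    }

The "Received shutdown signal" block above is bdevperf reacting to that SIGTERM; the degenerate latency table (min shows an uninitialized 2^64 value) is apparently bdevperf shutting down before any I/O workload was started, since keyring_file never calls perform_tests.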
************************************ 00:29:05.554 END TEST keyring_file 00:29:05.554 ************************************ 00:29:05.554 11:49:10 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:05.554 11:49:10 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:05.554 11:49:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.554 11:49:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.554 11:49:10 -- common/autotest_common.sh@10 -- # set +x 00:29:05.554 ************************************ 00:29:05.554 START TEST keyring_linux 00:29:05.554 ************************************ 00:29:05.554 11:49:10 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:05.554 Joined session keyring: 57216571 00:29:05.812 * Looking for test storage... 00:29:05.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:05.812 11:49:11 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.812 11:49:11 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.812 11:49:11 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.812 11:49:11 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.812 11:49:11 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:05.813 11:49:11 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.813 11:49:11 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.813 --rc genhtml_branch_coverage=1 00:29:05.813 --rc genhtml_function_coverage=1 00:29:05.813 --rc genhtml_legend=1 00:29:05.813 --rc geninfo_all_blocks=1 00:29:05.813 --rc geninfo_unexecuted_blocks=1 00:29:05.813 00:29:05.813 ' 00:29:05.813 11:49:11 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.813 --rc genhtml_branch_coverage=1 00:29:05.813 --rc genhtml_function_coverage=1 00:29:05.813 --rc genhtml_legend=1 00:29:05.813 --rc geninfo_all_blocks=1 00:29:05.813 --rc geninfo_unexecuted_blocks=1 00:29:05.813 00:29:05.813 ' 00:29:05.813 11:49:11 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.813 --rc genhtml_branch_coverage=1 00:29:05.813 --rc genhtml_function_coverage=1 00:29:05.813 --rc genhtml_legend=1 00:29:05.813 --rc geninfo_all_blocks=1 00:29:05.813 --rc geninfo_unexecuted_blocks=1 00:29:05.813 00:29:05.813 ' 00:29:05.813 11:49:11 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.813 --rc genhtml_branch_coverage=1 00:29:05.813 --rc genhtml_function_coverage=1 00:29:05.813 --rc genhtml_legend=1 00:29:05.813 --rc geninfo_all_blocks=1 00:29:05.813 --rc geninfo_unexecuted_blocks=1 00:29:05.813 00:29:05.813 ' 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.813 11:49:11 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:503bc55d-5584-47ba-b171-ff181f376052 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=503bc55d-5584-47ba-b171-ff181f376052 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.813 11:49:11 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.813 11:49:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.813 11:49:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.813 11:49:11 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.813 11:49:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:05.813 11:49:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:05.813 /tmp/:spdk-test:key0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:05.813 11:49:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:05.813 11:49:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
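prep_key, called once for each of key0 and key1 around this point, turns a raw hex key into the NVMe TLS PSK interchange form before writing it to the /tmp/:spdk-test:key* path with mode 0600. Judging by the traced format_key step and its output, the interchange string is the prefix NVMeTLSkey-1, a two-digit digest id (00 here, i.e. no PSK hash), and the base64 of the key bytes with a 4-byte CRC-32 appended. A hypothetical standalone equivalent of the "python -" step (the little-endian CRC byte order is an assumption, not visible in the trace):

    key=00112233445566778899aabbccddeeff
    python3 - "$key" <<'EOF'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()                    # the hex string is used as-is
    crc = struct.pack("<I", zlib.crc32(key))      # assumed little-endian CRC-32
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF

If those assumptions hold, this prints the same NVMeTLSkey-1:00:MDAx...JEiQ: value echoed into /tmp/:spdk-test:key0 above.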
112233445566778899aabbccddeeff00 0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:05.813 11:49:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:06.072 11:49:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:06.072 /tmp/:spdk-test:key1 00:29:06.072 11:49:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:06.072 11:49:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=113144 00:29:06.072 11:49:11 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:06.072 11:49:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 113144 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 113144 ']' 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.072 11:49:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:06.072 [2024-12-16 11:49:11.337517] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:29:06.072 [2024-12-16 11:49:11.337623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113144 ] 00:29:06.072 [2024-12-16 11:49:11.482572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.330 [2024-12-16 11:49:11.545234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.588 11:49:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.588 11:49:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:06.588 11:49:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:06.588 11:49:11 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.588 11:49:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:06.588 [2024-12-16 11:49:11.844486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.588 null0 00:29:06.588 [2024-12-16 11:49:11.876428] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:06.589 [2024-12-16 11:49:11.876649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.589 11:49:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:06.589 812128755 00:29:06.589 11:49:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:06.589 572395365 00:29:06.589 11:49:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=113168 00:29:06.589 11:49:11 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:06.589 11:49:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 113168 /var/tmp/bperf.sock 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 113168 ']' 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.589 11:49:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:06.589 [2024-12-16 11:49:11.966193] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
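The two keyctl add calls above install the formatted PSKs into the session keyring that the wrapper joined at startup ("Joined session keyring: 57216571"); the numbers they print, 812128755 and 572395365, are the kernel-assigned serials that the cleanup path unlinks later. The whole lifecycle, condensed from the traced commands (keyctl from keyutils is assumed to be installed):

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add to session keyring, prints serial
    keyctl search @s user :spdk-test:key0            # resolve name -> same serial
    keyctl print "$sn"                               # dump the payload SPDK will read
    keyctl unlink "$sn"                              # drop the link from @s when done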
00:29:06.589 [2024-12-16 11:49:11.966293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113168 ] 00:29:06.847 [2024-12-16 11:49:12.119105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.847 [2024-12-16 11:49:12.175493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.847 11:49:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.847 11:49:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:06.847 11:49:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:06.847 11:49:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:07.105 11:49:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:07.105 11:49:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.671 11:49:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:07.671 11:49:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:07.943 [2024-12-16 11:49:13.111138] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:07.943 nvme0n1 00:29:07.943 11:49:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:07.943 11:49:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:07.943 11:49:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:07.943 11:49:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:07.943 11:49:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.943 11:49:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:08.229 11:49:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:08.229 11:49:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:08.229 11:49:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:08.229 11:49:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:08.229 11:49:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:08.229 11:49:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.229 11:49:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@25 -- # sn=812128755 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 812128755 == \8\1\2\1\2\8\7\5\5 ]] 00:29:08.488 11:49:13 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 812128755 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:08.488 11:49:13 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.746 Running I/O for 1 seconds... 00:29:09.680 11839.00 IOPS, 46.25 MiB/s 00:29:09.680 Latency(us) 00:29:09.680 [2024-12-16T11:49:15.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.680 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:09.680 nvme0n1 : 1.01 11824.87 46.19 0.00 0.00 10754.55 4587.52 13762.56 00:29:09.680 [2024-12-16T11:49:15.094Z] =================================================================================================================== 00:29:09.680 [2024-12-16T11:49:15.094Z] Total : 11824.87 46.19 0.00 0.00 10754.55 4587.52 13762.56 00:29:09.680 { 00:29:09.680 "results": [ 00:29:09.680 { 00:29:09.680 "job": "nvme0n1", 00:29:09.680 "core_mask": "0x2", 00:29:09.680 "workload": "randread", 00:29:09.680 "status": "finished", 00:29:09.680 "queue_depth": 128, 00:29:09.680 "io_size": 4096, 00:29:09.680 "runtime": 1.012104, 00:29:09.680 "iops": 11824.871752310039, 00:29:09.680 "mibps": 46.19090528246109, 00:29:09.680 "io_failed": 0, 00:29:09.680 "io_timeout": 0, 00:29:09.680 "avg_latency_us": 10754.548799222168, 00:29:09.680 "min_latency_us": 4587.52, 00:29:09.680 "max_latency_us": 13762.56 00:29:09.680 } 00:29:09.680 ], 00:29:09.680 "core_count": 1 00:29:09.680 } 00:29:09.680 11:49:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:09.680 11:49:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:09.938 11:49:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:09.938 11:49:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:09.938 11:49:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:09.938 11:49:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:09.938 11:49:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:09.938 11:49:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.195 11:49:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:10.195 11:49:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:10.195 11:49:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:10.195 11:49:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@644 -- # 
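check_keys, used above and again right after the detach, cross-checks both views of a key: how many keys the bdevperf app reports over RPC, and whether the serial it reports for a given name matches what the kernel returns for that name. A hedged reconstruction from the traced steps (the real helper in test/keyring/linux.sh also compares the keyctl print payload, as seen above):

    check_keys() {
        local count=$1 name=$2 sn
        # app side: number of keys visible through the RPC listing
        (( $(bperf_cmd keyring_get_keys | jq length) == count )) || return 1
        (( count == 0 )) && return 0
        # the app's serial for this key must match the kernel's
        sn=$(bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .sn")
        [[ $sn == "$(keyctl search @s user "$name")" ]]
    }

Note that, unlike the keyring_file run earlier, the controller here was attached with --psk :spdk-test:key0, the kernel key name itself; that works because keyring_linux_set_options --enable was issued before framework_start_init.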
case "$(type -t "$arg")" in 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.195 11:49:15 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:10.195 11:49:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:10.453 [2024-12-16 11:49:15.820285] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:10.453 [2024-12-16 11:49:15.821147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226270 (107): Transport endpoint is not connected 00:29:10.453 [2024-12-16 11:49:15.822139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226270 (9): Bad file descriptor 00:29:10.453 [2024-12-16 11:49:15.823138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:10.453 [2024-12-16 11:49:15.823165] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:10.453 [2024-12-16 11:49:15.823176] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:10.453 [2024-12-16 11:49:15.823187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:29:10.453 2024/12/16 11:49:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:10.453 request: 00:29:10.453 { 00:29:10.453 "method": "bdev_nvme_attach_controller", 00:29:10.453 "params": { 00:29:10.453 "name": "nvme0", 00:29:10.453 "trtype": "tcp", 00:29:10.453 "traddr": "127.0.0.1", 00:29:10.453 "adrfam": "ipv4", 00:29:10.453 "trsvcid": "4420", 00:29:10.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.453 "prchk_reftag": false, 00:29:10.453 "prchk_guard": false, 00:29:10.453 "hdgst": false, 00:29:10.453 "ddgst": false, 00:29:10.453 "psk": ":spdk-test:key1", 00:29:10.453 "allow_unrecognized_csi": false 00:29:10.453 } 00:29:10.453 } 00:29:10.453 Got JSON-RPC error response 00:29:10.453 GoRPCClient: error on JSON-RPC call 00:29:10.453 11:49:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:10.453 11:49:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.453 11:49:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.453 11:49:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@33 -- # sn=812128755 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 812128755 00:29:10.454 1 links removed 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@33 -- # sn=572395365 00:29:10.454 11:49:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 572395365 00:29:10.454 1 links removed 00:29:10.712 11:49:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 113168 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113168 ']' 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113168 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113168 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.712 
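The failed attach above is the intended negative case: :spdk-test:key1 holds a different PSK from the one registered on the target side, so the connection is torn down during setup (hence the "Transport endpoint is not connected" errors) and the RPC returns Code=-5, Input/output error. The NOT wrapper turns that expected failure into a pass. Roughly, as traced (simplified; the real helper in common/autotest_common.sh also validates that the callee is runnable):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # killed by a signal: treat as a real failure
        (( es != 0 ))                   # succeed only if the command failed
    }

Hence es=1 from the attach satisfies (( !es == 0 )) and the test proceeds to unlink both keys and kill the processes.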
11:49:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.712 killing process with pid 113168 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113168' 00:29:10.712 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.712 00:29:10.712 Latency(us) 00:29:10.712 [2024-12-16T11:49:16.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.712 [2024-12-16T11:49:16.126Z] =================================================================================================================== 00:29:10.712 [2024-12-16T11:49:16.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 113168 00:29:10.712 11:49:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 113168 00:29:10.712 11:49:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 113144 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113144 ']' 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113144 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113144 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.712 killing process with pid 113144 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113144' 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 113144 00:29:10.712 11:49:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 113144 00:29:11.279 00:29:11.279 real 0m5.558s 00:29:11.279 user 0m10.922s 00:29:11.279 sys 0m1.658s 00:29:11.279 11:49:16 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.279 ************************************ 00:29:11.279 END TEST keyring_linux 00:29:11.279 ************************************ 00:29:11.279 11:49:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:11.279 11:49:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:11.279 11:49:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:11.279 11:49:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:11.279 11:49:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:11.279 11:49:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:11.279 11:49:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:11.279 11:49:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:11.279 11:49:16 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.279 11:49:16 -- common/autotest_common.sh@10 -- # set +x 00:29:11.279 11:49:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:11.279 11:49:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:11.279 11:49:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:11.279 11:49:16 -- common/autotest_common.sh@10 -- # set +x 00:29:13.181 INFO: APP EXITING 00:29:13.181 INFO: killing all VMs 00:29:13.181 INFO: killing vhost app 00:29:13.181 INFO: EXIT DONE 00:29:14.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.117 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:14.117 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:14.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.685 Cleaning 00:29:14.685 Removing: /var/run/dpdk/spdk0/config 00:29:14.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:14.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:14.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:14.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:14.685 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:14.685 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:14.685 Removing: /var/run/dpdk/spdk1/config 00:29:14.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:14.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:14.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:14.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:14.685 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:14.685 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:14.685 Removing: /var/run/dpdk/spdk2/config 00:29:14.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:14.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:14.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:14.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:14.685 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:14.685 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:14.685 Removing: /var/run/dpdk/spdk3/config 00:29:14.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:14.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:14.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:14.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:14.943 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:14.943 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:14.943 Removing: /var/run/dpdk/spdk4/config 00:29:14.943 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:14.943 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:14.943 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:14.943 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:14.943 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:14.943 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:14.943 Removing: /dev/shm/nvmf_trace.0 00:29:14.943 Removing: /dev/shm/spdk_tgt_trace.pid60317 00:29:14.943 Removing: /var/run/dpdk/spdk0 00:29:14.943 Removing: /var/run/dpdk/spdk1 00:29:14.943 Removing: /var/run/dpdk/spdk2 00:29:14.943 Removing: /var/run/dpdk/spdk3 00:29:14.943 Removing: /var/run/dpdk/spdk4 00:29:14.943 Removing: /var/run/dpdk/spdk_pid100014 00:29:14.943 Removing: 
/var/run/dpdk/spdk_pid100402 00:29:14.943 Removing: /var/run/dpdk/spdk_pid102877 00:29:14.943 Removing: /var/run/dpdk/spdk_pid102925 00:29:14.943 Removing: /var/run/dpdk/spdk_pid103264 00:29:14.943 Removing: /var/run/dpdk/spdk_pid103306 00:29:14.943 Removing: /var/run/dpdk/spdk_pid103706 00:29:14.943 Removing: /var/run/dpdk/spdk_pid104272 00:29:14.943 Removing: /var/run/dpdk/spdk_pid104706 00:29:14.943 Removing: /var/run/dpdk/spdk_pid105751 00:29:14.943 Removing: /var/run/dpdk/spdk_pid106818 00:29:14.943 Removing: /var/run/dpdk/spdk_pid106936 00:29:14.943 Removing: /var/run/dpdk/spdk_pid106994 00:29:14.943 Removing: /var/run/dpdk/spdk_pid108599 00:29:14.943 Removing: /var/run/dpdk/spdk_pid108910 00:29:14.943 Removing: /var/run/dpdk/spdk_pid109251 00:29:14.943 Removing: /var/run/dpdk/spdk_pid109816 00:29:14.943 Removing: /var/run/dpdk/spdk_pid109827 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110221 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110381 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110533 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110630 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110784 00:29:14.943 Removing: /var/run/dpdk/spdk_pid110889 00:29:14.943 Removing: /var/run/dpdk/spdk_pid111621 00:29:14.943 Removing: /var/run/dpdk/spdk_pid111651 00:29:14.943 Removing: /var/run/dpdk/spdk_pid111692 00:29:14.943 Removing: /var/run/dpdk/spdk_pid111941 00:29:14.943 Removing: /var/run/dpdk/spdk_pid111976 00:29:14.943 Removing: /var/run/dpdk/spdk_pid112006 00:29:14.943 Removing: /var/run/dpdk/spdk_pid112474 00:29:14.943 Removing: /var/run/dpdk/spdk_pid112497 00:29:14.943 Removing: /var/run/dpdk/spdk_pid112982 00:29:14.943 Removing: /var/run/dpdk/spdk_pid113144 00:29:14.943 Removing: /var/run/dpdk/spdk_pid113168 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60159 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60317 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60586 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60679 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60705 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60814 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60844 00:29:14.943 Removing: /var/run/dpdk/spdk_pid60984 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61269 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61453 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61543 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61629 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61719 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61752 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61787 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61857 00:29:14.943 Removing: /var/run/dpdk/spdk_pid61939 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62579 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62625 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62681 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62714 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62793 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62821 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62900 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62915 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62966 00:29:14.943 Removing: /var/run/dpdk/spdk_pid62996 00:29:14.943 Removing: /var/run/dpdk/spdk_pid63049 00:29:15.202 Removing: /var/run/dpdk/spdk_pid63079 00:29:15.202 Removing: /var/run/dpdk/spdk_pid63239 00:29:15.202 Removing: /var/run/dpdk/spdk_pid63269 00:29:15.202 Removing: /var/run/dpdk/spdk_pid63357 00:29:15.202 Removing: /var/run/dpdk/spdk_pid63828 00:29:15.202 Removing: /var/run/dpdk/spdk_pid64185 00:29:15.202 Removing: /var/run/dpdk/spdk_pid66710 00:29:15.202 Removing: 
/var/run/dpdk/spdk_pid66755 00:29:15.202 Removing: /var/run/dpdk/spdk_pid67106 00:29:15.202 Removing: /var/run/dpdk/spdk_pid67152 00:29:15.202 Removing: /var/run/dpdk/spdk_pid67552 00:29:15.202 Removing: /var/run/dpdk/spdk_pid68129 00:29:15.202 Removing: /var/run/dpdk/spdk_pid68568 00:29:15.202 Removing: /var/run/dpdk/spdk_pid69624 00:29:15.202 Removing: /var/run/dpdk/spdk_pid70720 00:29:15.202 Removing: /var/run/dpdk/spdk_pid70838 00:29:15.202 Removing: /var/run/dpdk/spdk_pid70910 00:29:15.202 Removing: /var/run/dpdk/spdk_pid72517 00:29:15.202 Removing: /var/run/dpdk/spdk_pid72867 00:29:15.202 Removing: /var/run/dpdk/spdk_pid76715 00:29:15.202 Removing: /var/run/dpdk/spdk_pid77133 00:29:15.202 Removing: /var/run/dpdk/spdk_pid77757 00:29:15.202 Removing: /var/run/dpdk/spdk_pid78297 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84136 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84642 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84751 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84897 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84955 00:29:15.202 Removing: /var/run/dpdk/spdk_pid84994 00:29:15.202 Removing: /var/run/dpdk/spdk_pid85033 00:29:15.202 Removing: /var/run/dpdk/spdk_pid85204 00:29:15.202 Removing: /var/run/dpdk/spdk_pid85358 00:29:15.202 Removing: /var/run/dpdk/spdk_pid85628 00:29:15.202 Removing: /var/run/dpdk/spdk_pid85762 00:29:15.202 Removing: /var/run/dpdk/spdk_pid86030 00:29:15.202 Removing: /var/run/dpdk/spdk_pid86148 00:29:15.202 Removing: /var/run/dpdk/spdk_pid86280 00:29:15.202 Removing: /var/run/dpdk/spdk_pid86677 00:29:15.202 Removing: /var/run/dpdk/spdk_pid87138 00:29:15.202 Removing: /var/run/dpdk/spdk_pid87140 00:29:15.202 Removing: /var/run/dpdk/spdk_pid87143 00:29:15.202 Removing: /var/run/dpdk/spdk_pid87419 00:29:15.202 Removing: /var/run/dpdk/spdk_pid87683 00:29:15.202 Removing: /var/run/dpdk/spdk_pid88094 00:29:15.202 Removing: /var/run/dpdk/spdk_pid88433 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89030 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89033 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89424 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89444 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89462 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89507 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89512 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89896 00:29:15.202 Removing: /var/run/dpdk/spdk_pid89945 00:29:15.202 Removing: /var/run/dpdk/spdk_pid90337 00:29:15.202 Removing: /var/run/dpdk/spdk_pid90594 00:29:15.202 Removing: /var/run/dpdk/spdk_pid91128 00:29:15.202 Removing: /var/run/dpdk/spdk_pid91739 00:29:15.202 Removing: /var/run/dpdk/spdk_pid93145 00:29:15.202 Removing: /var/run/dpdk/spdk_pid93784 00:29:15.202 Removing: /var/run/dpdk/spdk_pid93786 00:29:15.202 Removing: /var/run/dpdk/spdk_pid95849 00:29:15.202 Removing: /var/run/dpdk/spdk_pid95926 00:29:15.202 Removing: /var/run/dpdk/spdk_pid95997 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96075 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96219 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96296 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96371 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96463 00:29:15.202 Removing: /var/run/dpdk/spdk_pid96839 00:29:15.202 Removing: /var/run/dpdk/spdk_pid97593 00:29:15.202 Removing: /var/run/dpdk/spdk_pid99005 00:29:15.202 Removing: /var/run/dpdk/spdk_pid99199 00:29:15.202 Removing: /var/run/dpdk/spdk_pid99475 00:29:15.202 Clean 00:29:15.461 11:49:20 -- common/autotest_common.sh@1453 -- # return 0 00:29:15.461 11:49:20 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:29:15.461 11:49:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.461 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:29:15.461 11:49:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:15.461 11:49:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.461 11:49:20 -- common/autotest_common.sh@10 -- # set +x 00:29:15.461 11:49:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:15.461 11:49:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:15.461 11:49:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:15.461 11:49:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:15.461 11:49:20 -- spdk/autotest.sh@398 -- # hostname 00:29:15.461 11:49:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:15.720 geninfo: WARNING: invalid characters removed from testname! 00:29:42.319 11:49:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:44.865 11:49:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:47.397 11:49:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:49.952 11:49:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:53.245 11:49:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:55.777 11:50:00 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:58.309 11:50:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:58.309 11:50:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:58.309 11:50:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:58.309 11:50:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:58.309 11:50:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:58.309 11:50:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:58.309 + [[ -n 5260 ]] 00:29:58.309 + sudo kill 5260 00:29:58.329 [Pipeline] } 00:29:58.345 [Pipeline] // timeout 00:29:58.351 [Pipeline] } 00:29:58.365 [Pipeline] // stage 00:29:58.370 [Pipeline] } 00:29:58.384 [Pipeline] // catchError 00:29:58.394 [Pipeline] stage 00:29:58.396 [Pipeline] { (Stop VM) 00:29:58.408 [Pipeline] sh 00:29:58.710 + vagrant halt 00:30:01.996 ==> default: Halting domain... 00:30:08.570 [Pipeline] sh 00:30:08.850 + vagrant destroy -f 00:30:12.144 ==> default: Removing domain... 00:30:12.156 [Pipeline] sh 00:30:12.434 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:30:12.443 [Pipeline] } 00:30:12.457 [Pipeline] // stage 00:30:12.463 [Pipeline] } 00:30:12.477 [Pipeline] // dir 00:30:12.482 [Pipeline] } 00:30:12.497 [Pipeline] // wrap 00:30:12.503 [Pipeline] } 00:30:12.516 [Pipeline] // catchError 00:30:12.525 [Pipeline] stage 00:30:12.528 [Pipeline] { (Epilogue) 00:30:12.541 [Pipeline] sh 00:30:12.823 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:19.397 [Pipeline] catchError 00:30:19.399 [Pipeline] { 00:30:19.412 [Pipeline] sh 00:30:19.693 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:19.951 Artifacts sizes are good 00:30:19.959 [Pipeline] } 00:30:19.972 [Pipeline] // catchError 00:30:19.981 [Pipeline] archiveArtifacts 00:30:19.987 Archiving artifacts 00:30:20.097 [Pipeline] cleanWs 00:30:20.107 [WS-CLEANUP] Deleting project workspace... 00:30:20.107 [WS-CLEANUP] Deferred wipeout is used... 00:30:20.112 [WS-CLEANUP] done 00:30:20.114 [Pipeline] } 00:30:20.128 [Pipeline] // stage 00:30:20.133 [Pipeline] } 00:30:20.146 [Pipeline] // node 00:30:20.150 [Pipeline] End of Pipeline 00:30:20.204 Finished: SUCCESS
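The coverage postprocessing near the end of the run (the lcov calls between the END TEST banner and the vagrant teardown) captures a post-test profile, merges it with the pre-test baseline, then prunes vendored and system paths before the report is generated. Stripped of the repeated --rc options and long absolute paths, the traced sequence amounts to:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    # capture test-time counters for the whole repo, tagged with the VM hostname
    $LCOV -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info
    # merge with the baseline taken before the tests ran
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info
    # drop vendored DPDK, system headers, and example/app code from the totals
    $LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info
    $LCOV -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    $LCOV -r cov_total.info '*/examples/vmd/*' -o cov_total.info

Here $repo stands in for /home/vagrant/spdk_repo/spdk; the genhtml_* rc options present in the original commands only affect the HTML report and are omitted, as are the spdk_lspci and spdk_top removals, which follow the same pattern.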